# Dieting
**Dieting** is the practice of eating food in a regulated way to decrease, maintain, or increase body weight, or to prevent and treat diseases such as diabetes and obesity. As weight loss depends on calorie intake, different kinds of calorie-reduced diets, such as those emphasising particular macronutrients (low-fat, low-carbohydrate, etc.), have been shown to be no more effective than one another. As weight regain is common, diet success is best predicted by long-term adherence. Regardless, the outcome of a diet can vary widely depending on the individual.
The first popular diet was "Banting", named after William Banting. In his 1863 pamphlet, *Letter on Corpulence, Addressed to the Public*, he outlined the details of a particular low-carbohydrate, low-calorie diet that led to his own dramatic weight loss.
Some guidelines recommend dieting to lose weight for people with weight-related health problems, but not for otherwise healthy people. One survey found that almost half of all American adults attempt to lose weight through dieting, including 66.7% of obese adults and 26.5% of normal weight or underweight adults. Dieters who are overweight (but not obese), who are normal weight, or who are underweight may have an increased mortality rate as a result of dieting.
## History
The word *diet* comes from the Greek *δίαιτα (diaita)*, which denotes a whole healthy way of life, including both mental and physical health, rather than a narrow weight-loss regimen.
One of the first dietitians was the English doctor George Cheyne. He himself was tremendously overweight and would constantly eat large quantities of rich food and drink. He began a meatless diet, taking only milk and vegetables, and soon regained his health. He began publicly recommending his diet for everyone who was obese. In 1724, he wrote *An Essay of Health and Long Life*, in which he advises exercise and fresh air and avoiding luxury foods.
The Scottish military surgeon, John Rollo, published *Notes of a Diabetic Case* in 1797. It described the benefits of a meat diet for those with diabetes, basing this recommendation on Matthew Dobson's discovery of glycosuria in diabetes mellitus. By means of Dobson's testing procedure (for glucose in the urine) Rollo worked out a diet that had success for what is now called type 2 diabetes.
The first popular diet was "Banting", named after the English undertaker William Banting. In 1863, he wrote a booklet called *Letter on Corpulence, Addressed to the Public*, which contained the particular plan for the diet he had successfully followed. His own diet was four meals per day, consisting of meat, greens, fruits, and dry wine. The emphasis was on avoiding sugar, sweet foods, starch, beer, milk and butter. Banting's pamphlet was popular for years to come, and would be used as a model for modern diets. The pamphlet's popularity was such that the question "Do you bant?" referred to his method, and eventually to dieting in general. His booklet remains in print as of 2007.
The first weight-loss book to promote calorie counting, and the first weight-loss book to become a bestseller, was the 1918 *Diet and Health: With Key to the Calories* by American physician and columnist Lulu Hunt Peters.
It was estimated that, as of 2014, over 1,000 weight-loss diets had been developed.
## Types
A restricted diet is most commonly pursued by those who want to lose weight. Some people follow a diet to gain weight (such as people who are underweight or who are attempting to gain more muscle). Diets can also be used to maintain a stable body weight or to improve health.
### Low-fat {#low_fat}
Low-fat diets involve the reduction of the percentage of fat in one's diet. Calorie consumption is reduced because less fat is consumed. Diets of this type include NCEP Step I and II. A meta-analysis of 16 trials of 2--12 months' duration found that low-fat diets (without intentional restriction of caloric intake) resulted in average weight loss of 3.2 kg over habitual eating.
A low-fat, plant-based diet has been found to improve control of weight, blood sugar levels, and cardiovascular health.
### Low-carbohydrate {#low_carbohydrate}
### Low-calorie {#low_calorie}
Low-calorie diets usually produce an energy deficit of 500--1,000 calories per day, which can result in a weight loss of 0.5 to 1 kg per week. The National Institutes of Health reviewed 34 randomized controlled trials to determine the effectiveness of low-calorie diets. They found that these diets lowered total body mass by 8% in the short term, over 3--12 months. Women doing low-calorie diets should have at least 1,000 calories per day and men should have approximately 1,200 calories per day. These caloric intake values vary depending on additional factors, such as age and weight.
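As a rough sketch of the arithmetic behind these figures, the snippet below assumes the common rule-of-thumb of roughly 7,700 kcal per kilogram of body fat; that constant, and the function name, are illustrative assumptions rather than figures from this article:

```python
# Rough illustration: expected weekly weight loss from a daily calorie deficit.
# Assumes ~7,700 kcal per kg of body fat (a widely used approximation, not an
# exact constant); real results vary with metabolism, water weight, adherence.

KCAL_PER_KG_FAT = 7700

def weekly_weight_loss_kg(daily_deficit_kcal: float) -> float:
    """Estimate weight lost per week (kg) for a given daily energy deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_KG_FAT

for deficit in (500, 1000):
    print(f"{deficit} kcal/day deficit ≈ {weekly_weight_loss_kg(deficit):.2f} kg/week")
# ≈ 0.45 kg/week and ≈ 0.91 kg/week, consistent with the 0.5–1 kg range above.
```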
### Very low-calorie {#very_low_calorie}
Very low calorie diets provide 200--800 calories per day, maintaining protein intake but limiting calories from both fat and carbohydrates. They subject the body to starvation and produce an average loss of 1.5 to 2.5 kg per week. "2-4-6-8", a popular diet of this variety, follows a four-day cycle in which only 200 calories are consumed the first day, 400 the second day, 600 the third day, and 800 the fourth day, followed by a day of total fasting, after which the cycle repeats. There is some evidence that these diets result in considerable weight loss. These diets are not recommended for general use and should be reserved for the management of obesity, as they are associated with adverse side effects such as loss of lean muscle mass, increased risk of gout, and electrolyte imbalances. People attempting these diets must be monitored closely by a physician to prevent complications.
Crash dieting involves drastically reducing calorie intake using a very-low-calorie diet. It can be highly dangerous: while it can produce weight loss, without ongoing professional supervision the extreme reduction in calories and the potential imbalance in the diet's composition can lead to detrimental effects, including sudden death.
### Fasting
Fasting is the act of intentionally taking a long interval between meals. Lengthy fasting (multiple days per week) can be dangerous due to the risk of malnutrition. During prolonged fasting or very low calorie diets, the reduction of blood glucose, the preferred energy source of the brain, causes the body to deplete its glycogen stores. Once glycogen is depleted the body begins to fuel the brain using ketones, while also metabolizing body protein (including but not limited to skeletal muscle) to synthesize sugars for use as energy by the rest of the body. Most experts believe that a prolonged fast can lead to muscle wasting, although some dispute this. Short-term fasting, in various forms of intermittent fasting, has been used as a form of dieting to circumvent the issues of long fasting.
Intermittent fasting commonly takes the form of periodic fasting, alternate-day fasting, time-restricted feeding, and/or religious fasting. It can be a form of reduced-calorie dieting, but pertains entirely to when during the day the metabolism is activated for digestion. Regular changes to eating habits do not have to be severe or absolutely restrictive to yield cardiovascular benefits, such as improved glucose metabolism, reduced inflammation, and reduced blood pressure. Studies have suggested that for people in intensive care, an intermittent fasting regimen might "[preserve] energy supply to vital organs and tissues... [and] powerfully activates cell-protective and cellular repair pathways, including autophagy, mitochondrial biogenesis and antioxidant defenses, which may promote resilience to cellular stress." The effects of decreased serum glucose and depleted hepatic glycogen, which cause the body to switch to ketogenic metabolism, are similar to the effects of reduced-carbohydrate diets. There is evidence demonstrating profound metabolic benefits of intermittent fasting in rodents. However, evidence in humans is lacking or contradictory and requires further investigation, especially over the long term. Some evidence suggests that intermittent restriction of caloric intake has no weight-loss advantage over continuous calorie-restriction plans. For adults, fasting diets appear to be safe and tolerable; however, periods of fasting and hunger could lead to overeating and weight regain after the fasting period. Adverse effects of fasting are often moderate and include halitosis, fatigue, weakness, and headaches. Fasting diets may be harmful to children and the elderly.
### Exclusion diet {#exclusion_diet}
This type of diet is based on the restriction of specific foods or food groups. Examples include gluten-free, Paleo, plant-based, and Mediterranean diets.
Plant-based diets include vegetarian and vegan diets, and can range from the simple exclusion of meat products to diets that only include raw vegetables, fruits, nuts, seeds, legumes, and sprouted grains. Exclusion of animal products can reduce the intake of certain nutrients, which might lead to nutritional deficiencies of protein, iron, zinc, calcium, and vitamins D and B~12~. Therefore, long term implementation of a plant-based diet requires effective counseling and nutritional supplementation as necessary. Plant-based diets are effective for short-term treatment of overweight and obesity, likely due to the high consumption of low energy density foods. However, evidence for long-term efficacy is limited.
The Paleo diet includes foods that it identifies as having been available to Paleolithic peoples including meat, nuts, eggs, some oils, fresh fruits, and vegetables. Overall, it is high in protein and moderate in fats and carbohydrates. Some limited evidence suggests various health benefits and effective weight loss with this diet. However, similar to the plant-based diet, the Paleo diet has potential nutritional deficiency risks, specifically with vitamin D, calcium, and iodine.
Gluten-free diets are often used for weight loss, but their efficacy has been little studied, and the metabolic mechanism for any effectiveness is unclear.
The Mediterranean diet is characterized by high consumption of vegetables, fruits, legumes, whole-grain cereals, seafood, olive oil, and nuts. Red meat, dairy and alcohol are only recommended in moderation. Studies show that the Mediterranean diet is associated with short term as well as long term weight loss in addition to health and metabolic benefits.
### Detox
Detox diets are promoted with unsubstantiated claims that they can eliminate "toxins" from the human body. Many of these diets use herbs or celery and other juicy low-calorie vegetables. Detox diets can include fasting or exclusion (as in juice fasting). Detox diets tend to result in short-term weight loss (because of calorie restriction), followed by weight gain.
### Environmentally sustainable {#environmentally_sustainable}
Another kind of diet focuses not on the dieter's health, but on the diet's environmental impact. The One Blue Dot plan of the British Dietetic Association (BDA) offers recommendations for reducing diets' environmental impacts, by:
1. Reducing meat to 70g per person per day.
2. Prioritising plant proteins.
3. Promoting fish from sustainable sources.
4. Moderating dairy consumption.
5. Focusing on wholegrain starchy foods.
6. Promoting seasonal locally sourced fruits and vegetables.
7. Reducing overconsumption of foods high in fat, sugar, and salt.
8. Promoting tap water and unsweetened tea/coffee as the de facto choice for healthy hydration.
9. Reducing food waste.
## Effectiveness
Several diets are effective for short-term weight loss for obese individuals, with diet success most predicted by adherence and little effect resulting from the type or brand of diet. As weight maintenance depends on calorie intake, diets emphasising certain macronutrients (low-fat, low-carbohydrate, etc.) have been shown to be no more effective than one another and no more effective than diets that maintain a typical mix of foods with smaller portions and perhaps some substitutions (e.g. low-fat milk, or less salad dressing). A meta-analysis of six randomized controlled trials found no difference between low-calorie, low-carbohydrate, and low-fat diets in terms of short-term weight loss, with a 2--4 kilogram weight loss over 12--18 months in all studies. Diets that severely restrict calorie intake do not lead to long term weight loss. Extreme diets may, in some cases, lead to malnutrition.
A major challenge regarding weight loss and dieting relates to compliance. While dieting can effectively promote weight loss in the short term, the intervention is hard to maintain over time and suppresses skeletal muscle thermogenesis. Suppressed thermogenesis accelerates weight regain once the diet stops, unless that phase is accompanied by a well-timed exercise intervention, as described by the Summermatter cycle. Most diet studies do not assess long-term weight loss.
Some studies have found that, on average, short-term dieting results in a "meaningful" long-term weight loss, although limited because of gradual weight regain of 1 to 2 kg/year. Because people who do not participate in weight-loss programs also tend to gain weight over time, and baseline data from such "untreated" participants are typically not included in diet studies, it is possible that diets do result in lower weights in the long term relative to people who do not diet. Others have suggested that dieting is ineffective as a long-term intervention. The results differ for each individual, with some regaining more weight than they lost and a few others achieving a tremendous loss, so the "average weight loss" of a diet is not indicative of the results other dieters may achieve. A 2001 meta-analysis of 29 American studies found that participants of structured weight-loss programs maintained an average of 23% (3 kg) of their initial weight loss after five years, representing a sustained 3.2% reduction in body mass. Unfortunately, patients are generally unhappy with weight loss of <10%, and reductions even as high as 10% are insufficient to change someone with an "obese" BMI to a "normal weight" BMI.
Partly because diets do not reliably produce long-term positive health outcomes, some argue against using weight loss as a goal, preferring other measures of health such as improvements in cardiovascular biomarkers, sometimes called a Health at Every Size (HAES) approach or a "weight neutral" approach.
Long-term losses from dieting are best maintained with continuing professional support, long-term increases in physical activity, the use of anti-obesity medications, continued use of meal replacements, and additional periods of dieting to undo weight regain. The most effective approach to weight loss is an in-person, high-intensity, comprehensive lifestyle intervention: overweight or obese adults should maintain regular (at least monthly) contact with a trained interventionist who can help them engage in exercise, monitor their body weight, and reduce their calorie consumption. Even with high-intensity, comprehensive lifestyle interventions (consisting of diet, physical exercise, and bimonthly or even more frequent contact with trained interventionists), gradual weight regain of 1--2 kg/year still occurs. For patients at high medical risk, bariatric surgery or medications may be warranted in addition to the lifestyle intervention, as dieting by itself may not lead to sustained weight loss.
Many studies overestimate the benefits of calorie restriction because the studies confound exercise and diet (testing the effects of diet and exercise as a combined intervention, rather than the effects of diet alone).
### Adverse effects {#adverse_effects}
#### Increased mortality rate {#increased_mortality_rate}
A number of studies have found that intentional weight loss is associated with an increase in mortality in people without weight-related health problems. A 2009 meta-analysis of 26 studies found that "intentional weight loss had a small benefit for individuals classified as unhealthy (with obesity-related risk factors), especially unhealthy obese, but appeared to be associated with slightly increased mortality for healthy individuals, and for those who were overweight but not obese."
#### Dietary supplements {#dietary_supplements}
Due to extreme or unbalanced diets, dietary supplements are sometimes taken in an attempt to replace missing vitamins or minerals. While some supplements could be helpful for people eating an unbalanced diet (if replacing essential nutrients, for example), overdosing on any dietary supplement can cause a range of side effects depending on the supplement and dose that is taken. Supplements should not replace foods that are important to a healthy diet.
#### Eating disorders {#eating_disorders}
In an editorial for *Psychological Medicine*, George Hsu concludes that dieting is likely to lead to the development of an eating disorder in the presence of certain risk factors. A 2006 study found that dieting and unhealthy weight-control behaviors were predictive of obesity and eating disorders five years later, with the authors recommending a "shift away from dieting and drastic weight-control measures toward the long-term implementation of healthful eating and physical activity".
## Mechanism
When the body is expending more energy than it is consuming (e.g. when exercising), the body's cells rely on internally stored energy sources, such as complex carbohydrates and fats, for energy. The first source to which the body turns is glycogen (by glycogenolysis). Glycogen is a complex carbohydrate, 65% of which is stored in skeletal muscles and the remainder in the liver (totaling about 2,000 kcal in the whole body). It is created from the excess of ingested macronutrients, mainly carbohydrates. When glycogen is nearly depleted, the body begins lipolysis, the mobilization and catabolism of fat stores for energy. In this process fats, obtained from adipose tissue, or fat cells, are broken down into glycerol and fatty acids, which can be used to generate energy. The primary by-products of metabolism are carbon dioxide and water; carbon dioxide is expelled through the respiratory system.
### Set-Point Theory {#set_point_theory}
The Set-Point Theory, first introduced in 1953, postulated that each body has a preprogrammed fixed weight, with regulatory mechanisms to compensate. This theory was quickly adopted and used to explain failures in developing effective and sustained weight loss procedures. A 2019 systematic review of multiple weight change procedures, including alternate day fasting and time-restricted feeding but also exercise and overeating, found systematic \"energetic errors\" for all these procedures. This shows that the body cannot precisely compensate for errors in energy/calorie intake, countering the Set-Point Theory and potentially explaining both weight loss and weight gain such as obesity. This review was conducted on short-term studies, therefore such a mechanism cannot be excluded in the long term, as evidence is currently lacking on this timeframe.
## Methods
### Meal timing {#meal_timing}
A meal timing schedule is known to be an important factor in any diet. Recent evidence suggests that new scheduling strategies, such as intermittent fasting or skipping meals, and strategically placed snacks before meals, may be advisable for reducing cardiovascular risk as part of a broader lifestyle and dietary change.
### Food diary {#food_diary}
A 2008 study published in the American Journal of Preventive Medicine showed that dieters who kept a daily food diary (or diet journal) lost twice as much weight as those who did not keep a food log, suggesting that recording one's eating raises awareness of what is consumed and thereby reduces calorie intake.
### Water
A 2009 review found limited evidence suggesting that encouraging water consumption and substituting energy-free beverages for energy-containing beverages (i.e., reducing caloric intake) may facilitate weight management. A 2009 article found that drinking 500 ml of water prior to meals for a 12-week period resulted in increased long-term weight reduction.
## Society
It is estimated that about 1 out of 3 Americans is dieting at any given time, and 85% of dieters are women. Approximately sixty billion dollars are spent every year in the USA on diet products, including "diet foods" such as light sodas, gym memberships, and specific regimens. About 80% of dieters start by themselves, whereas 20% see a professional or join a paid program. The typical dieter makes four attempts per year.
### Weight loss groups {#weight_loss_groups}
Some weight loss groups aim to make money; others work as charities. The former include Weight Watchers and Peertrainer. The latter include Overeaters Anonymous, TOPS Club, and groups run by local organizations.
These organizations' customs and practices differ widely. Some groups are modelled on twelve-step programs, while others are quite informal. Some groups advocate certain prepared foods or special menus, while others train dieters to make healthy choices from restaurant menus and while grocery-shopping and cooking.
Attending group meetings for weight reduction programmes rather than receiving one-on-one support may increase the likelihood that obese people will lose weight. Those who participated in groups had more treatment time and were more likely to lose enough weight to improve their health. Study authors suggested that one explanation for the difference is that group participants spent more time with the clinician (or whoever delivered the programme) than those receiving one-on-one support.
# Disk storage
**Disk storage** (also sometimes called **drive storage**) is a data storage mechanism based on a rotating disk. The recording employs various electronic, magnetic, optical, or mechanical changes to the disk's surface layer. A **disk drive** is a device implementing such a storage mechanism. Notable types are hard disk drives (HDD), containing one or more non-removable rigid platters; the floppy disk drive (FDD) and its removable floppy disk; and various optical disc drives (ODD) and associated optical disc media.
(The spellings *disk* and *disc* are used interchangeably except where trademarks preclude one usage, e.g., the Compact Disc logo. The choice of a particular form is frequently historical, as in IBM's usage of the *disk* form beginning in 1956 with the "IBM 350 disk storage unit".)
## Background
Audio information was originally recorded by analog methods (see Sound recording and reproduction). Similarly the first video disc used an analog recording method. In the music industry, analog recording has been mostly replaced by digital optical technology where the data is recorded in a digital format with optical information.
The first commercial digital disk storage device was the IBM 350 which shipped in 1956 as a part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already used sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage; however, the total cost of ownership of data on disk including power and management remains larger than that of tape.
Disk storage is now used in both computer storage and consumer electronic storage, e.g., audio CDs and video discs (VCD, DVD and Blu-ray).
Data on modern disks is stored in fixed length blocks, usually called sectors and varying in length from a few hundred to many thousands of bytes. Gross disk drive capacity is simply the number of disk surfaces times the number of blocks/surface times the number of bytes/block. In certain legacy IBM CKD drives the data was stored on magnetic disks with variable length blocks, called records; record length could vary on and between disks. Capacity decreased as record length decreased due to the necessary gaps between blocks.
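As a simple illustration of that capacity arithmetic, the sketch below multiplies the three factors named above; the drive geometry values are hypothetical, chosen only to make the formula concrete:

```python
# Gross capacity = surfaces × blocks per surface × bytes per block.
# The geometry below is made up for illustration, not from any real product.

def gross_capacity_bytes(surfaces: int, blocks_per_surface: int, bytes_per_block: int) -> int:
    """Gross drive capacity from fixed-length-block geometry."""
    return surfaces * blocks_per_surface * bytes_per_block

# e.g., 4 recording surfaces, 1,000,000 blocks per surface, 512-byte sectors:
capacity = gross_capacity_bytes(surfaces=4, blocks_per_surface=1_000_000, bytes_per_block=512)
print(f"{capacity / 10**9:.1f} GB")  # 2.0 GB
```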
## Access methods {#access_methods}
Digital disk drives are block storage devices. Each disk is divided into logical blocks (collections of sectors). Blocks are addressed using their logical block addresses (LBA). Reads from and writes to the disk happen at the granularity of blocks.
Originally disk capacity was quite low; it was improved in one of several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each disk. Advances in data compression permitted more information to be stored in each individual sector.
The drive stores data onto cylinders, heads, and sectors. The sector unit is the smallest size of data to be stored in a hard disk drive, and each file will have many sector units assigned to it. The smallest entity in a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples (two bytes × two channels × six samples = 24 bytes). The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display.
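The cylinder/head/sector geometry maps onto the logical block addresses described above by a conventional formula; the sketch below shows that standard CHS-to-LBA conversion, with hypothetical geometry constants chosen for illustration:

```python
# Conventional CHS → LBA mapping used by legacy PC disk interfaces:
#   LBA = (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)
# Sectors are 1-based in CHS addressing; LBAs are 0-based.
# The geometry values below are illustrative, not from this article.

HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """Convert a cylinder/head/sector address to a logical block address."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))  # 0    -- the first sector on the disk
print(chs_to_lba(1, 0, 1))  # 1008 -- one full cylinder (16 * 63 sectors) in
```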
The information is sent from the computer processor through the BIOS to a chip controlling the data transfer. It is then sent to the hard drive via a multi-wire connector. Once the data is received on the drive's circuit board, it is translated and compressed into a format that the individual drive can use to store it on the disk itself. The data is then passed to a chip on the circuit board that controls access to the drive. The drive is divided into sectors of data stored on one of the sides of one of the internal disks. An HDD with two disks internally will typically store data on all four surfaces.
The hardware on the drive tells the actuator arm where to go for the relevant track, and the compressed information is then sent down to the head, which changes the physical properties (optical or magnetic, for example) of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the way best suited for quickest retrieval.
## Rotation speed and track layout {#rotation_speed_and_track_layout}
Mechanically there are two different motions occurring inside the drive. One is the rotation of the disks inside the device. The other is the side-to-side motion of the head across the disk as it moves between tracks.
There are two types of disk rotation methods:
- constant linear velocity (used mainly in optical storage) varies the rotational speed of the optical disc depending upon the position of the head, and
- constant angular velocity (used in HDDs, standard FDDs, a few optical disc systems, and vinyl audio records) spins the media at one constant speed regardless of where the head is positioned.
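To make the difference between the two rotation methods concrete, the sketch below computes linear speed under CAV and the spindle speed a CLV system needs at a given radius. The specific figures (7,200 RPM, a 1.3 m/s CLV target typical of audio CDs, and the radii) are illustrative assumptions, not values from this article:

```python
import math

# CAV: spindle speed is fixed, so linear speed under the head grows with radius.
# CLV: linear speed is fixed, so the spindle must slow as the head moves outward.

def cav_linear_speed(rpm: float, radius_m: float) -> float:
    """Linear speed (m/s) under the head for a fixed spindle speed."""
    return 2 * math.pi * radius_m * rpm / 60

def clv_required_rpm(linear_speed: float, radius_m: float) -> float:
    """Spindle speed (RPM) needed to hold a fixed linear speed at a radius."""
    return linear_speed * 60 / (2 * math.pi * radius_m)

# CAV hard drive at 7,200 RPM: inner vs outer track linear speed.
print(cav_linear_speed(7200, 0.015), cav_linear_speed(7200, 0.045))  # ~11.3 vs ~33.9 m/s

# CLV audio disc holding ~1.3 m/s: spindle slows from inner to outer tracks.
print(clv_required_rpm(1.3, 0.025), clv_required_rpm(1.3, 0.058))    # ~497 vs ~214 RPM
```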
Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g., HDDs, FDDs, and Iomega zip drives, use concentric tracks to store data. During a sequential read or write operation, after the drive accesses all the sectors in a track, it repositions the head(s) to the next track. This will cause a momentary delay in the flow of data between the device and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge. When reading or writing data, there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except vinyl records started at the outer edge and spiraled in toward the center.
## Interfaces
The disk drive interface is the mechanism/protocol of communication between the rest of the system and the disk drive itself. Storage devices intended for desktop and mobile computers typically use ATA (PATA) and SATA interfaces. Enterprise systems and high-end storage devices will typically use SCSI, SAS, and FC interfaces in addition to some use of SATA.
## Basic terminology {#basic_terminology}
Disk : Generally refers to magnetic media and devices.
Disc : Required by trademarks for certain optical media and devices.
Platter : An individual recording disk. A hard disk drive contains a set of platters. Developments in optical technology have led to multiple recording layers on DVDs.
Spindle : The spinning axle on which the platters are mounted.
Rotation : Platters rotate; two techniques are common:
:* Constant angular velocity (CAV) keeps the disk spinning at a fixed rate, measured in revolutions per minute (RPM). This means the heads cover more distance per unit of time on the outer tracks than on the inner tracks. This method is typical with computer hard drives.
:* Constant linear velocity (CLV) keeps the distance covered by the heads per unit time fixed. Thus the disk has to slow down as the arm moves to the outer tracks. This method is typical for CD drives.
Track : The circle of recorded data on a single recording surface of a platter.
Sector : A segment of a track.
Low-level formatting : Establishing the tracks and sectors.
Head : The device that reads and writes the information---magnetic or optical---on the disk surface.
Arm : The mechanical assembly that supports the head as it moves in and out.
Seek time : Time needed to move the head to a new position (specific track).
Rotational latency : Average time, once the arm is on the right track, before a head is over a desired sector.
Data transfer rate : The rate at which user data bits are transferred from or to the medium. Technically, this would more accurately be called the "gross" data transfer rate.
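These timing terms combine into a simple estimate of average access time, as the sketch below shows; the drive parameters (RPM, seek time, transfer rate, block size) are illustrative desktop-HDD assumptions, not figures from this article:

```python
# Average access time ≈ average seek time + average rotational latency
#                       + transfer time for the requested block.
# Average rotational latency is half a revolution: (60 / RPM) / 2.
# All drive parameters below are illustrative, typical of a desktop HDD.

rpm = 7200
avg_seek_ms = 9.0              # average seek time
transfer_rate_mb_s = 150.0     # sustained media transfer rate
block_kb = 4                   # one 4 KiB block

rotational_latency_ms = (60_000 / rpm) / 2                  # ≈ 4.17 ms
transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000   # ≈ 0.026 ms

access_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
print(f"average access time ≈ {access_ms:.2f} ms")          # ≈ 13.19 ms
```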
# Disk operating system
A **disk operating system** (**DOS**) is a computer operating system that requires a disk or other direct-access storage device as secondary storage. A DOS provides a file system and a means for loading and running programs stored on the disk.
The term is now historical, as most if not all operating systems for general-purpose computers now require direct-access storage devices as secondary storage.
## History
Before modern storage such as the disk drive, floppy disk, and flash storage, early computers used storage such as delay lines, core memory, punched cards, punched tape, magnetic tape, and magnetic drums. Early microcomputers and home computers used paper tape, audio cassette tape (such as the Kansas City standard), or no permanent storage at all. Without permanent storage, programs and data are input directly into memory using front panel switches, or through a computer terminal or keyboard, sometimes controlled by a BASIC interpreter in ROM. When power is turned off, all information is lost.
In the early 1960s, as disk drives became larger and more affordable, various mainframe and minicomputer vendors introduced disk operating systems and modified existing operating systems to use disks.
Hard disks and floppy disk drives require software to manage rapid access to block storage of sequential and other data. For most microcomputers, a disk drive of any kind was an optional peripheral. Systems could be used with a tape drive or booted without a storage device at all. The disk operating system component of the operating system was only needed when a disk drive was used.
By the time IBM announced the System/360 mainframes, the concept of a disk operating system was well established. Although IBM did offer Basic Programming Support (BPS/360) and TOS/360 for small systems, they were out of the mainstream and most customers used either DOS/360 or OS/360.
Most home and personal computers of the late 1970s and 1980s used a disk operating system, most often with "DOS" in the name and simply referred to as "DOS" within its user community: for example, CBM DOS, Atari DOS, TRS-DOS, Apple DOS, Apple ProDOS, and MS-DOS. CP/M is also a disk operating system, despite not having "DOS" in the name.
A DOS is usually loaded from a disk, but there are exceptions, such as Commodore's disk drives for the Commodore 64 and VIC-20, which contain the DOS in ROM. Some versions of AmigaDOS reside mostly in ROM, as part of the Kickstart firmware.
## OS extensions {#os_extensions}
- Commodore DOS is used on 8-bit Commodore computers such as the Commodore 64. Unlike most other DOS systems, it is integrated into the disk drives, not loaded into the computer's own memory.
- Atari DOS is used by the Atari 8-bit computers. The Atari OS only offers low-level disk-access, so an extra layer called DOS can be booted from a floppy for higher level functions such as filesystems. Third-party replacements for Atari DOS include DOS XL, SpartaDOS, MyDOS, TurboDOS, and Top-DOS.
- MSX-DOS is for the MSX computer standard. The initial version, released in 1984, is MS-DOS 1.0 ported to the Z80. Version 2, released in 1988, added facilities such as subdirectories, memory management, and environment strings. The MSX-DOS kernel resides in ROM (built into the disk controller), so basic file access is available even without the command interpreter, by using extended BASIC commands.
- Disc Filing System (DFS) is an optional component for the Acorn BBC Micro, as a kit with a disk controller chip, a ROM chip, and a few logic chips, to be installed inside the computer.
- Advanced Disc Filing System (ADFS) is a successor to Acorn\'s DFS.
- AMSDOS is for the Amstrad CPC computers.
- GDOS and G+DOS is for the +D and DISCiPLE disk interfaces for the ZX Spectrum.
## Main OSes {#main_oses}
Some disk operating systems are the operating systems for the entire computer system.
- The Burroughs (now Unisys) Master Control Program (MCP) for the B5000 originally runs from a drum, but starting with the B5500 it runs from a disk. It is the basis for the MCP on the B6500, B7500, and successors.
- The SIPROS, Chippewa Operating System (COS), SCOPE, MACE and KRONOS operating systems on the Control Data Corporation (CDC) 6000 series and 7600 are all disk operating systems. KRONOS became NOS and SCOPE became NOS/BE.
- The GECOS operating system for the GE (later Honeywell and Groupe Bull) 600 family of mainframe computers (it later became GCOS).
- The IBM Basic Operating System/360 (BOS/360), Disk Operating System/360 (DOS/360) and Operating System/360 (OS/360) are standard for all but the smallest System/360 installations; the 360/67 also has Control Program-67 /Cambridge Monitor System (CP-67/CMS) and Time Sharing System/360 (TSS/360). BOS is gone, CP-67/CMS has evolved into z/VM, DOS has evolved into z/VSE, OS has evolved into z/OS and TSS/360 evolved into TSS/370 PRPQ, which is now gone.
- The EXEC II operating system for the UNIVAC 1107 and 1108, and the EXEC 8 operating system for the 1108, which has evolved into OS 2200 for the Unisys ClearPath Dorado Series.
- The DOS-11 operating system for DEC PDP-11 minicomputers.
- CP/M is a disk operating system, as the main or alternate operating system for numerous microcomputers of the 1970s and 1980s.
- Apple DOS is the primary operating system for the Apple II, from 1979 with the introduction of the floppy disk drive, until 1983 when it was replaced by ProDOS.
- TRSDOS is the operating system for the TRS-80 line of computers from Tandy.
- MS-DOS for IBM PC compatibles with Intel x86 CPUs. 86-DOS was modeled on CP/M and then adapted as the basis for Microsoft's MS-DOS. It was rebranded by IBM as PC DOS until 1993. Various compatible systems were later produced by different organizations, starting with DR-DOS in 1988.
# Diesel cycle
The **Diesel cycle** is a combustion process of a reciprocating internal combustion engine. In it, fuel is ignited by heat generated during the compression of air in the combustion chamber, into which fuel is then injected. This is in contrast to igniting the fuel-air mixture with a spark plug as in the Otto cycle (four-stroke/petrol) engine. Diesel engines are used in aircraft, automobiles, power generation, diesel--electric locomotives, and both surface ships and submarines.
The Diesel cycle is assumed to have constant pressure during the initial part of the combustion phase ($V_2$ to $V_3$ in the diagram, below). This is an idealized mathematical model: real physical diesels do have an increase in pressure during this period, but it is less pronounced than in the Otto cycle. In contrast, the idealized Otto cycle of a gasoline engine approximates a constant volume process during that phase.
## Idealized Diesel cycle {#idealized_diesel_cycle}
*(Figure: p--V diagram for the ideal Diesel cycle. The cycle follows the numbers 1--4 in clockwise direction.)*
The image shows a p--V diagram for the ideal Diesel cycle, where $p$ is pressure and $V$ the volume, or $v$ the specific volume if the process is placed on a unit-mass basis. The *idealized* Diesel cycle assumes an ideal gas, ignores combustion chemistry and the exhaust and recharge procedures, and simply follows four distinct processes:
- 1→2 : isentropic compression of the fluid (blue)
- 2→3 : constant pressure heating (red)
- 3→4 : isentropic expansion (yellow)
- 4→1 : constant volume cooling (green)
The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic processes (blue), energy is transferred into the system in the form of work $W_{in}$, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant pressure (red, isobaric) process, energy enters the system as heat $Q_{in}$. During the top isentropic processes (yellow), energy is transferred out of the system in the form of $W_{out}$, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant volume (green, isochoric) process, some of the energy flows out of the system as heat through the right depressurizing process $Q_{out}$. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system.
- Work in ($W_{in}$) is done by the piston compressing the air (system)
- Heat in ($Q_{in}$) is done by the combustion of the fuel
- Work out ($W_{out}$) is done by the working fluid expanding and pushing a piston (this produces usable work)
- Heat out ($Q_{out}$) is done by venting the air
- Net work produced = $Q_{in}$ - $Q_{out}$
The net work produced is also represented by the area enclosed by the cycle on the p--V diagram. The net work is produced per cycle and is also called the useful work, as it can be turned to other useful types of energy and propel a vehicle (kinetic energy) or produce electrical energy. The summation of many such cycles per unit of time is called the developed power. The $W_{out}$ is also called the gross work, some of which is used in the next cycle of the engine to compress the next charge of air.
### Maximum thermal efficiency {#maximum_thermal_efficiency}
The maximum thermal efficiency of a Diesel cycle is dependent on the compression ratio and the cut-off ratio. It has the following formula under cold air standard analysis:
$\eta_{th}=1-\frac{1}{r^{\gamma-1}}\left ( \frac{\alpha^{\gamma}-1}{\gamma(\alpha-1)} \right )$
where

- $\eta_{th}$ is the thermal efficiency
- $\alpha$ is the cut-off ratio $\frac{V_3}{V_2}$ (ratio between the end and start volume for the combustion phase)
- $r$ is the compression ratio $\frac{V_1}{V_2}$
- $\gamma$ is the ratio of specific heats (C~p~/C~v~)
The cut-off ratio can be expressed in terms of temperature as shown below:
$$\frac{T_2}{T_1} ={\left(\frac{V_1}{V_2}\right)^{\gamma-1}} = r^{\gamma-1}$$
$$\displaystyle {T_2} ={T_1} r^{\gamma-1}$$
$$\frac{V_3}{V_2} = \frac{T_3}{T_2}$$
$$\alpha = \left(\frac{T_3}{T_1}\right)\left(\frac{1}{r^{\gamma-1}}\right)$$
$T_3$ can be approximated to the flame temperature of the fuel used. The flame temperature can be approximated to the adiabatic flame temperature of the fuel with corresponding air-to-fuel ratio and compression pressure, $p_3$. $T_1$ can be approximated to the inlet air temperature.
This formula only gives the ideal thermal efficiency. The actual thermal efficiency will be significantly lower due to heat and friction losses. The formula is more complex than the Otto cycle (petrol/gasoline engine) relation that has the following formula:
$\eta_{otto,th}=1-\frac{1}{r^{\gamma-1}}$
The additional complexity for the Diesel formula comes around since the heat addition is at constant pressure and the heat rejection is at constant volume. The Otto cycle by comparison has both the heat addition and rejection at constant volume.
### Comparing efficiency to Otto cycle {#comparing_efficiency_to_otto_cycle}
Comparing the two formulae it can be seen that for a given compression ratio (`{{math|r}}`{=mediawiki}), the *ideal* Otto cycle will be more efficient. However, a *real* diesel engine will be more efficient overall since it will have the ability to operate at higher compression ratios. If a petrol engine were to have the same compression ratio, then knocking (self-ignition) would occur and this would severely reduce the efficiency, whereas in a diesel engine, the self ignition is the desired behavior. Additionally, both of these cycles are only idealizations, and the actual behavior does not divide as clearly or sharply. Furthermore, the ideal Otto cycle formula stated above does not include throttling losses, which do not apply to diesel engines.
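As a numeric illustration of the two efficiency formulas and of the comparison just described, the sketch below evaluates both; the compression ratios and cut-off ratio are typical textbook values assumed for illustration, not taken from this article:

```python
# Ideal (cold air standard) thermal efficiencies of the Diesel and Otto cycles.

def diesel_efficiency(r: float, alpha: float, gamma: float = 1.4) -> float:
    """eta = 1 - (1/r**(gamma-1)) * ((alpha**gamma - 1) / (gamma * (alpha - 1)))"""
    return 1 - (1 / r**(gamma - 1)) * ((alpha**gamma - 1) / (gamma * (alpha - 1)))

def otto_efficiency(r: float, gamma: float = 1.4) -> float:
    """eta = 1 - 1/r**(gamma-1)"""
    return 1 - 1 / r**(gamma - 1)

# At the same compression ratio, the ideal Otto cycle is more efficient...
print(otto_efficiency(r=10))             # ≈ 0.602
print(diesel_efficiency(r=10, alpha=2))  # ≈ 0.534

# ...but a real diesel can run a much higher compression ratio:
print(diesel_efficiency(r=20, alpha=2))  # ≈ 0.647
```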
## Applications
### Diesel engines {#diesel_engines}
Diesel engines have the lowest specific fuel consumption of any large internal combustion engine employing a single cycle, 0.26 lb/hp·h (0.16 kg/kWh) for very large marine engines (combined cycle power plants are more efficient, but employ two engines rather than one). Two-stroke diesels with high pressure forced induction, particularly turbocharging, make up a large percentage of the very largest diesel engines.
In North America, diesel engines are primarily used in large trucks, where the low-stress, high-efficiency cycle leads to much longer engine life and lower operational costs. These advantages also make the diesel engine ideal for use in the heavy-haul railroad and earthmoving environments.
### Other internal combustion engines without spark plugs {#other_internal_combustion_engines_without_spark_plugs}
Many model airplanes use very simple \"glow\" and \"diesel\" engines. Glow engines use glow plugs. \"Diesel\" model airplane engines have variable compression ratios. Both types depend on special fuels.
Some 19th-century or earlier experimental engines used external flames, exposed by valves, for ignition, but this becomes less attractive with increasing compression. (It was the research of Nicolas Léonard Sadi Carnot that established the thermodynamic value of compression.) A historical implication of this is that the diesel engine could have been invented without the aid of electricity. See the development of the hot-bulb engine and indirect injection for historical significance.
# Data set
A **data set** (or **dataset**) is a collection of data. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as for example height and weight of an object, for each member of the data set. Data sets can also consist of a collection of documents or files.
In the open data discipline, a dataset is a unit used to measure the amount of information released in a public open data repository. The European data.europa.eu portal aggregates more than a million data sets.
## Properties
Several characteristics define a data set's structure and properties. These include the number and types of the attributes or variables, and various statistical measures applicable to them, such as standard deviation and kurtosis.
The values may be numbers, such as real numbers or integers, for example representing a person\'s height in centimeters, but may also be nominal data (i.e., not consisting of numerical values), for example representing a person\'s ethnicity. More generally, values may be of any of the kinds described as a level of measurement. For each variable, the values are normally all of the same kind. Missing values may exist, which must be indicated somehow.
In statistics, data sets usually come from actual observations obtained by sampling a statistical population, and each row corresponds to the observations on one element of that population. Data sets may further be generated by algorithms for the purpose of testing certain kinds of software. Some modern statistical analysis software such as SPSS still present their data in the classical data set fashion. If data is missing or suspicious an imputation method may be used to complete a data set.
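As a small illustration of such a tabular data set and of one simple imputation method, the sketch below uses invented height/weight values and mean imputation (only one of many approaches):

```python
# A tiny tabular data set: each row is one observation, each key a variable.
# One height value is missing (None) and is filled in by mean imputation.

rows = [
    {"height_cm": 170.0, "weight_kg": 65.2},
    {"height_cm": None,  "weight_kg": 80.1},   # missing value
    {"height_cm": 182.5, "weight_kg": 77.4},
]

observed = [r["height_cm"] for r in rows if r["height_cm"] is not None]
mean_height = sum(observed) / len(observed)    # 176.25

for r in rows:
    if r["height_cm"] is None:
        r["height_cm"] = mean_height           # impute the missing value

print(rows[1])  # {'height_cm': 176.25, 'weight_kg': 80.1}
```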
## Classics
Several classic data sets have been used extensively in the statistical literature:
- Iris flower data set -- Multivariate data set introduced by Ronald Fisher (1936). [Provided online by University of California-Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Iris).
- MNIST database -- Images of handwritten digits commonly used to test classification, clustering, and image processing algorithms
- *Categorical data analysis* -- Data sets used in the book, *An Introduction to Categorical Data Analysis*, [provided online](https://stats.oarc.ucla.edu/other/examples/icda/) by UCLA Advanced Research Computing.
- *Robust statistics* -- Data sets used in *Robust Regression and Outlier Detection* (Rousseeuw and Leroy, 1987). [Provided online](https://web.archive.org/web/20050207032959/http://www.uni-koeln.de/themen/statistik/data/rousseeuw/) at the University of Cologne.
- *Time series* -- Data used in Chatfield's book, *The Analysis of Time Series*, are [provided on-line](https://web.archive.org/web/20110102201323/http://lib.stat.cmu.edu/modules.php?op=modload&name=PostWrap&file=index&page=datasets/) by StatLib.
- *Extreme values* -- Data used in the book, *An Introduction to the Statistical Modeling of Extreme Values*, are [a snapshot of the data as it was provided on-line by Stuart Coles](https://web.archive.org/web/20060910161517/http://homes.stat.unipd.it/coles/public_html/ismev/ismev.dat), the book's author.
- *Bayesian Data Analysis* -- Data used in the book are [provided on-line](http://www.stat.columbia.edu/~gelman/book/data/) ([archive link](https://web.archive.org/web/20230122121643/http://www.stat.columbia.edu/~gelman/book/data/)) by Andrew Gelman, one of the book's authors.
- The [Bupa liver data](https://web.archive.org/web/20171023174701/http://ftp.ics.uci.edu:80/pub/machine-learning-databases/liver-disorders/) -- Used in several papers in the machine learning (data mining) literature.
- Anscombe's quartet -- Small data set illustrating the importance of graphing the data to avoid statistical fallacies
# Slalom skiing
**Slalom** is an alpine skiing and alpine snowboarding discipline, involving skiing between poles or gates. These are spaced more closely than those in giant slalom, super giant slalom and downhill, necessitating quicker and shorter turns. Internationally, the sport is contested at the FIS Alpine World Ski Championships, and at the Olympic Winter Games.
## History
The term **slalom** comes from the Morgedal/Seljord dialect form of the Norwegian word "slalåm": "sla", meaning "slightly inclining hillside", and "låm", meaning "track after skis". The inventors of modern skiing classified their trails according to their difficulty:
- *Slalåm* was a trail used in Telemark by boys and girls not yet able to try themselves on the more challenging runs.
- *Ufsilåm* was a trail with one obstacle (*ufse*) like a jump, a fence, a difficult turn, a gorge, a cliff (often more than 10 m high), et cetera.
- *Uvyrdslåm* was a trail with several obstacles.
A Norwegian military downhill competition in 1767 included racing downhill among trees "without falling or breaking skis". Sondre Norheim and other skiers from Telemark practiced *uvyrdslåm*, or "disrespectful/reckless downhill", where they raced downhill in difficult and untested terrain (i.e., off piste). The 1866 "ski race" in Oslo was a combined cross-country, jumping, and slalom competition. In the slalom, participants were allowed to use poles for braking and steering, and they were given points for style (appropriate skier posture). During the late 19th century, Norwegian skiers participated in all branches (jumping, slalom, and cross-country), often with the same pair of skis. Slalom and variants of slalom were often referred to as hill races. Around 1900, hill races were abandoned in the Oslo championships at Huseby and Holmenkollen. Mathias Zdarsky's development of the Lilienfeld binding helped change hill races into a specialty of the Alps region.
The rules for the modern slalom were developed by Arnold Lunn in 1922 for the British National Ski Championships, and adopted for alpine skiing at the 1936 Winter Olympics. Under these rules gates were marked by pairs of flags rather than single ones, were arranged so that the racers had to use a variety of turn lengths to negotiate them, and scoring was on the basis of time alone, rather than on both time and style.
## Course
A course is constructed by laying out a series of gates, formed by alternating pairs of red and blue poles. The skier must pass between the two poles forming the gate, with the tips of both skis and the skier's feet passing between the poles. A course has 55 to 75 gates for men and 40 to 60 for women. The vertical drop for a men's course is 180 to 220 m and slightly less for women. The gates are arranged in a variety of configurations to challenge the competitor.
## Clearing the gates {#clearing_the_gates}
Traditionally, bamboo poles were used for gates, the rigidity of which forced skiers to maneuver their entire body around each gate. In the early 1980s, rigid poles were replaced by hard plastic poles, hinged at the base. The hinged gates require, according to FIS rules, only that the skis and boots of the skier go around each gate.
The new gates allow a more direct path down a slalom course through the process of cross-blocking or shinning the gates. Cross-blocking is a technique in which the legs go around the gate with the upper body inclined toward, or even across, the gate; in this case the racer's outside pole and shin guards hit the gate, knocking it down and out of the way. Cross-blocking is done by pushing the gate down with the arms, hands, or shins. By 1989, most of the top technical skiers in the world had adopted the cross-block technique.
## Equipment
With the innovation of shaped skis around the turn of the 21st century, equipment used for slalom in international competition changed drastically. World Cup skiers commonly skied on slalom skis 203 to 207 cm long in the 1980s and 1990s, but by the 2002 Olympic Winter Games in Salt Lake City, the majority of competitors were using skis measuring 160 cm or less.
The downside of the shorter skis was that athletes found that recoveries were more difficult with a smaller platform underfoot. Out of concern for the safety of athletes, the FIS began to set minimum ski lengths for international slalom competition. The minimum was initially set at 155 cm for men and 150 cm for women, but was increased to 165 cm for men and 155 cm for women for the 2003--2004 season.
The equipment minimums and maximums imposed by the International Ski Federation (FIS) have created a backlash from skiers, suppliers, and fans. The main objection is that the federation is regressing the equipment, and hence the sport, by two decades.
American Bode Miller hastened the shift to the shorter, more radical sidecut skis when he achieved unexpected success after becoming the first Junior Olympic athlete to adopt the equipment in giant slalom and super-G in 1996. A few years later, the technology was adapted to slalom skis as well.
## Men's slalom World Cup podiums {#mens_slalom_world_cup_podiums}
The following table lists the men's slalom World Cup podiums for each season since the first World Cup season in 1967.
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| Season | 1st | 2nd | 3rd |
+========+==================================================+======================================================+======================================================+
| 1967 | Jean-Claude Killy | Guy Perillat | Heinrich Messner |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1968 | Dumeng Giovanoli | Jean-Claude Killy | Patrick Russel |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1969 | Alain Penz\ | | |
| | `{{flagicon|AUT}}`{=mediawiki} Alfred Matt\ | | |
| | `{{flagicon|FRA}}`{=mediawiki} Jean-Noel Augert\ | | |
| | `{{flagicon|FRA}}`{=mediawiki} Patrick Russel | | |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1970 | Alain Penz | Jean-Noël Augert\ | |
| | | `{{flagicon|FRA}}`{=mediawiki} Patrick Russel | |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1971 | Jean-Noël Augert | Gustav Thöni | Tyler Palmer |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1972 | Jean-Noël Augert | Andrzej Bachleda | Roland Thöni |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1973 | Gustav Thöni | Christian Neureuther | Jean-Noël Augert |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1974 | Gustav Thöni | Christian Neureuther | Johann Kniewasser |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1975 | Ingemar Stenmark | Gustav Thöni | Piero Gros |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1976 | Ingemar Stenmark | Piero Gros | Gustav Thöni\ |
| | | | `{{flagicon|AUT}}`{=mediawiki} Hans Hinterseer |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1977 | Ingemar Stenmark | Klaus Heidegger | Paul Frommelt |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1978 | Ingemar Stenmark | Klaus Heidegger | Phil Mahre |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1979 | Ingemar Stenmark | Phil Mahre | Christian Neureuther |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1980 | Ingemar Stenmark | Bojan Križaj | Christian Neureuther |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1981 | Ingemar Stenmark | Phil Mahre | Bojan Križaj\ |
| | | | `{{flagicon|USA}}`{=mediawiki} Steve Mahre |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1982 | Phil Mahre | Ingemar Stenmark | Steve Mahre |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1983 | Ingemar Stenmark | Stig Strand | Andreas Wenzel |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1984 | Marc Girardelli | Ingemar Stenmark | Franz Gruber |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1985 | Marc Girardelli | Paul Frommelt | Ingemar Stenmark |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1986 | Rok Petrovič | Bojan Križaj\ | |
| | | `{{flagicon|SWE}}`{=mediawiki} Ingemar Stenmark\ | |
| | | `{{flagicon|LIE}}`{=mediawiki} Paul Frommelt | |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1987 | Bojan Križaj | Ingemar Stenmark | Armin Bittner |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1988 | Alberto Tomba | Günther Mader | Felix McGrath |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1989 | Armin Bittner | Alberto Tomba | Marc Girardelli\ |
| | | | `{{flagicon|NOR}}`{=mediawiki} Ole Kristian Furuseth |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1990 | Armin Bittner | Alberto Tomba\ | |
| | | `{{flagicon|NOR}}`{=mediawiki} Ole Kristian Furuseth | |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1991 | Marc Girardelli | Ole Kristian Furuseth | Rudolf Nierlich |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1992 | Alberto Tomba | Paul Accola | Finn Christian Jagge |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1993 | Thomas Fogdö | Alberto Tomba | Thomas Stangassinger |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1994 | Alberto Tomba | Thomas Stangassinger | Jure Košir |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1995 | Alberto Tomba | Michael Tritscher | Jure Košir |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1996 | Sebastien Amiez | Alberto Tomba | Thomas Sykora |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1997 | Thomas Sykora | Thomas Stangassinger | Finn Christian Jagge |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1998 | Thomas Sykora | Thomas Stangassinger | Hans Petter Buraas |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 1999 | Thomas Stangassinger | Jure Košir | Finn Christian Jagge |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2000 | Kjetil André Aamodt | Ole Kristian Furuseth | Matjaž Vrhovnik |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2001 | Benjamin Raich | Heinz Schilchegger | Mario Matt |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2002 | Ivica Kostelić | Bode Miller | Jean-Pierre Vidal |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2003 | Kalle Palander | Ivica Kostelić | Rainer Schönfelder |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2004 | Rainer Schönfelder | Kalle Palander | Benjamin Raich |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2005 | Benjamin Raich | Rainer Schönfelder | Manfred Pranger |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2006 | Giorgio Rocca | Kalle Palander | Benjamin Raich |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2007 | Benjamin Raich | Mario Matt | Jens Byggmark |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2008 | Manfred Mölgg | Jean-Baptiste Grange | Reinfried Herbst |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2009 | Jean-Baptiste Grange | Ivica Kostelić | Julien Lizeroux |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2010 | Reinfried Herbst | Julien Lizeroux | Silvan Zurbriggen |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2011 | Ivica Kostelić | Jean-Baptiste Grange | André Myhrer |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2012 | André Myhrer | Ivica Kostelić | Marcel Hirscher |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2013 | Marcel Hirscher | Felix Neureuther | Ivica Kostelić |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2014 | Marcel Hirscher | Felix Neureuther | Henrik Kristoffersen |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2015 | Marcel Hirscher | Felix Neureuther | Alexander Khoroshilov |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2016 | Henrik Kristoffersen | Marcel Hirscher | Felix Neureuther |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2017 | Marcel Hirscher | Henrik Kristoffersen | Manfred Mölgg |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2018 | Marcel Hirscher | Henrik Kristoffersen | André Myhrer |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2019 | Marcel Hirscher | Clément Noël | Daniel Yule |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2020 | Henrik Kristoffersen | Clément Noël | Daniel Yule |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2021 | Marco Schwarz | Clément Noël | Ramon Zenhäusern |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2022 | Henrik Kristoffersen | Manuel Feller | Atle Lie McGrath |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2023 | Lucas Braathen | Henrik Kristoffersen | Ramon Zenhäusern |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
| 2024 | Manuel Feller | Linus Straßer | Timon Haugan |
+--------+--------------------------------------------------+------------------------------------------------------+------------------------------------------------------+
## Women\'s slalom World Cup podiums {#womens_slalom_world_cup_podiums}
The following table lists the top three in the women\'s slalom World Cup season standings since the first World Cup season in 1967.
# Disjunction introduction
**Disjunction introduction** or **addition** (also called **or introduction**) is a rule of inference of propositional logic and almost every other deduction system. The rule makes it possible to introduce disjunctions to logical proofs. It is the inference that if *P* is true, then *P or Q* must be true.
An example in English:
: Socrates is a man.
: Therefore, Socrates is a man or pigs are flying in formation over the English Channel.
The rule can be expressed as:
$$\frac{P}{\therefore P \lor Q}$$
where the rule is that whenever instances of \"$P$\" appear on lines of a proof, \"$P \lor Q$\" can be placed on a subsequent line.
More generally, disjunction introduction is also a simple valid argument form (if the premise is true, then the conclusion is true, as any rule of inference should guarantee) and an immediate inference, since it has a single proposition in its premises.
Disjunction introduction is not a rule in some paraconsistent logics because, in combination with other rules of logic, it leads to explosion (i.e. everything becomes provable), and paraconsistent logic tries to avoid explosion and to be able to reason with contradictions. One of the solutions is to restrict the rules under which a disjunction may be introduced. See `{{slink|Paraconsistent logic|Tradeoffs}}`{=mediawiki}.
## Formal notation {#formal_notation}
The *disjunction introduction* rule may be written in sequent notation:
: $P \vdash (P \lor Q)$
where $\vdash$ is a metalogical symbol meaning that $P \lor Q$ is a syntactic consequence of $P$ in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
$$P \to (P \lor Q)$$
where $P$ and $Q$ are propositions expressed in some formal system.
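The rule corresponds directly to a primitive constructor in proof assistants. The following Lean 4 snippet is a minimal sketch added here for illustration (it is not part of the original text); `Or.inl` is exactly disjunction introduction on the left:

```lean
-- Disjunction introduction: from a proof hP of P, conclude P ∨ Q.
example (P Q : Prop) (hP : P) : P ∨ Q := Or.inl hP
```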
# Disjunction elimination
In propositional logic, **disjunction elimination** (sometimes named **proof by cases**, **case analysis**, or **or elimination**) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement $P$ implies a statement $Q$ and a statement $R$ also implies $Q$, then if either $P$ or $R$ is true, then $Q$ has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q, Q is certainly true.
An example in English:
: If I\'m inside, I have my wallet on me.
: If I\'m outside, I have my wallet on me.
: It is true that either I\'m inside or I\'m outside.
: Therefore, I have my wallet on me.
The rule can be stated as:
$$\frac{P \to Q, R \to Q, P \lor R}{\therefore Q}$$
where the rule is that whenever instances of \"$P \to Q$\", and \"$R \to Q$\" and \"$P \lor R$\" appear on lines of a proof, \"$Q$\" can be placed on a subsequent line.
## Formal notation {#formal_notation}
The *disjunction elimination* rule may be written in sequent notation:
: $(P \to Q), (R \to Q), (P \lor R) \vdash Q$
where $\vdash$ is a metalogical symbol meaning that $Q$ is a syntactic consequence of $P \to Q$, $R \to Q$, and $P \lor R$ in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
$$(((P \to Q) \land (R \to Q)) \land (P \lor R)) \to Q$$
where $P$, $Q$, and $R$ are propositions expressed in some formal system.
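As with introduction, the rule is a primitive in proof assistants. A minimal Lean 4 sketch, added here for illustration (not from the original text):

```lean
-- Disjunction elimination (proof by cases): Or.elim combines a proof of
-- P ∨ R with proofs that each disjunct entails Q into a proof of Q.
example (P Q R : Prop) (hpq : P → Q) (hrq : R → Q) (hpr : P ∨ R) : Q :=
  Or.elim hpr hpq hrq
```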
# Differential cryptanalysis
**Differential cryptanalysis** is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformation, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key (cryptography key).
## History
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: \"After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography.\" Within IBM, differential cryptanalysis was known as the \"T-attack\" or \"Tickle attack\".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. By contrast, differential cryptanalysis of DES requires an effort on the order of 2^47^ chosen plaintexts.
## Attack mechanics {#attack_mechanics}
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant *difference*. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a **differential**. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials $(\Delta_x, \Delta_y)$ where $\Delta_y = S(x \oplus \Delta_x) \oplus S(x)$ (and ⊕ denotes exclusive or) for each such S-box *S*. In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
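To make differentials concrete, here is a short Python sketch, added for illustration only, that tabulates the differentials of a hypothetical 4-bit S-box (the table values are an invented example, not any standard cipher\'s S-box):

```python
# Build the difference distribution table (DDT) of a toy 4-bit S-box.
# Entry ddt[dx][dy] counts the inputs x with S[x ^ dx] ^ S[x] == dy.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def difference_distribution_table(sbox):
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            ddt[dx][sbox[x ^ dx] ^ sbox[x]] += 1
    return ddt

ddt = difference_distribution_table(SBOX)
# Apart from the trivial entry ddt[0][0] == 16, large entries mark
# differentials (dx, dy) that hold with high probability and are the
# natural starting points for an attack.
print(max(ddt[dx][dy] for dx in range(1, 16) for dy in range(16)))
```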
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least *r* − 1 rounds, where *r* is the total number of rounds.`{{fact|date=February 2025}}`{=mediawiki} The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
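The counting procedure can be sketched end to end on a toy example. The two-round cipher below (4-bit values, u = S[x XOR k0], then c = S[u] XOR k1) and its S-box are invented purely for illustration; real attacks differ mainly in scale:

```python
import random

SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
SINV = [SBOX.index(v) for v in range(16)]

def encrypt(x, k0, k1):
    # Toy two-round cipher: u = S[x ^ k0]; c = S[u] ^ k1.
    return SBOX[SBOX[x ^ k0]] ^ k1

# Most probable one-round differential dx -> du: the characteristic to follow.
ddt = {}
for dx in range(1, 16):
    for x in range(16):
        du = SBOX[x ^ dx] ^ SBOX[x]
        ddt[(dx, du)] = ddt.get((dx, du), 0) + 1
(dx, du), _ = max(ddt.items(), key=lambda kv: kv[1])

# Chosen-plaintext pairs with input difference dx under an unknown key.
k0, k1 = random.randrange(16), random.randrange(16)
pairs = [(encrypt(x, k0, k1), encrypt(x ^ dx, k0, k1)) for x in range(16)]

# For every last-round key guess, peel off the final round and count how
# often the predicted difference du appears. The true k1 tends to score
# highest, though with such a tiny state the ranking is only probabilistic.
counts = [sum(SINV[ca ^ g] ^ SINV[cb ^ g] == du for ca, cb in pairs)
          for g in range(16)]
print("true k1 =", k1, "best guess =", counts.index(max(counts)))
```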
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm\'s internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a *differential characteristic*.
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against it.
## Attack in detail {#attack_in_detail}
The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or *S-boxes*). Observing the desired output difference (between two chosen or known plaintext inputs) *suggests* possible key values.
For example, if a differential of 1 =\> 1 (implying a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of 4/256 (possible with the non-linear function in the AES cipher for instance) then the differential is possible for only 4 values (or 2 pairs) of inputs. Suppose we have a non-linear function where the key is XOR\'ed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values of {6, 7} and observes the correct output difference, it means that 6 ⊕ K must itself be an input that allows the differential, i.e. 6 ⊕ K ∈ {2, 3, 4, 5}, so the key K must be one of {2, 3, 4, 5}, narrowing the key space to four candidates.
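A quick check of this narrowing, under the same hypothetical values and with an 8-bit key space as in the AES-flavoured example:

```python
# Inputs for which the hypothetical differential 1 -> 1 holds: {2,3} and {4,5}.
good_inputs = {2, 3, 4, 5}
# Keys consistent with the chosen pair (6, 7) showing the correct output
# difference. Note 7 ^ k is automatically in the same pair as 6 ^ k, since
# good_inputs is closed under flipping the LSB.
print(sorted(k for k in range(256) if (6 ^ k) in good_inputs))  # [2, 3, 4, 5]
```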
In essence, to protect a cipher from the attack, one would ideally want an n-bit non-linear function whose every differential occurs with probability as close to 2^−(*n*\ −\ 1)^ as possible, a property called *differential uniformity*. When this is achieved, the differential attack requires as much work to determine the key as simply brute-forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much *weaker* non-linear function. The incredibly high branch number (active S-box count) of 25 over four rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr\[attack\] ≤ Pr\[best attack on S-box\]^50^. For example, with the current S-box AES emits no fixed differential with a probability higher than (4/256)^50^ or 2^−300^, which is far lower than the required threshold of 2^−128^ for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been only 2^−200^.
No bijection on an even number of bits achieves 2-uniformity. Such functions exist in odd-sized fields (such as GF(2^7^)), using either cubing or inversion (there are other exponents that can be used as well). For instance, S(x) = x^3^ in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in their 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks.`{{why|date=February 2017}}`{=mediawiki} That is, they are possible to describe and solve via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
# Drawing
**Drawing** is a visual art that uses an instrument to mark paper or another two-dimensional surface, or a digital representation of such. Traditionally, the instruments used to make a drawing include pencils, crayons, and ink pens, sometimes in combination. More modern tools include computer styluses with graphics tablets and gamepads in VR drawing software.
A drawing instrument releases a small amount of material onto a surface, leaving a visible mark. The most common support for drawing is paper, although other materials, such as cardboard, vellum, wood, plastic, leather, canvas, and board, have been used. Temporary drawings may be made on a blackboard or whiteboard. Drawing has been a popular and fundamental means of public expression throughout human history. It is one of the simplest and most efficient means of communicating ideas. The wide availability of drawing instruments makes drawing one of the most common artistic activities.
In addition to its more artistic forms, drawing is frequently used in commercial illustration, animation, architecture, engineering, and technical drawing. A quick, freehand drawing, usually not intended as a finished work, is sometimes called a sketch. An artist who practices or works in technical drawing may be called a drafter, draftsman, or draughtsman.
## Overview
Drawing is one of the oldest forms of human expression within the visual arts. It is generally concerned with the marking of lines and areas of tone onto paper/other material, where the accurate representation of the visual world is expressed upon a plane surface. Traditional drawings were monochrome, or at least had little colour, while modern colored-pencil drawings may approach or cross a boundary between drawing and painting. In Western terminology, drawing is distinct from painting, even though similar media often are employed in both tasks. Dry media, normally associated with drawing, such as chalk, may be used in pastel paintings. Drawing may be done with a liquid medium, applied with brushes or pens. Using a brush for drawing is very widespread and here it is more the process of using lines and hatching, that characterises something as a drawing. Similar supports likewise can serve both: painting generally involves the application of liquid paint onto prepared canvas or panels, but sometimes an underdrawing is drawn first on that same support. Drawing is often exploratory, with considerable emphasis on observation, problem-solving and composition. Drawing is also regularly used in preparation for a painting, further obfuscating their distinction. Drawings created for these purposes are called sketches.
There are several categories of drawing, including:
- figure drawing
- cartooning
- doodling
- sketch
- freehand.
There are also many drawing methods, such as:
- line drawing
- stippling
- shading
- entopic graphomania (a surrealist method in which dots are made at the sites of impurities in a blank sheet of paper, and lines are then made between the dots)
- tracing (drawing on a translucent paper, such as *tracing paper*, around the outline of preexisting shapes that show through the paper).
In fields outside art, technical drawings or plans of buildings, machinery, circuitry and other things are often called \"drawings\" even when they have been transferred to another medium by printing.
## History
### In communication {#in_communication}
Drawing is one of the oldest forms of human expression, with evidence for its existence preceding that of written communication. It is believed that drawing was used as a specialised form of communication before the invention of the written language, demonstrated by the production of cave and rock paintings around 30,000 years ago (Art of the Upper Paleolithic). These drawings, known as pictograms, depicted objects and abstract concepts. The sketches and paintings produced by Neolithic times were eventually stylised and simplified into symbol systems (proto-writing) and eventually into early writing systems.
### In manuscripts {#in_manuscripts}
Before the widespread availability of paper in Europe, monks in European monasteries used drawings, either as underdrawings for illuminated manuscripts on vellum or parchment, or as the final image. Drawing has also been used extensively in the field of science, as a method of discovery, understanding and explanation.
### In science {#in_science}
thumb\|upright=0.7\|Galileo Galilei, *Phases of the Moon*, 1609 or 1610, brown ink and wash on paper. 208 × 142 mm. National Central Library (Florence), Gal. 48, fol. 28r

Drawing diagrams of observations is an important part of scientific study.
In 1609, astronomer Galileo Galilei explained the changing phases of Venus and also the sunspots through his observational telescopic drawings. In 1924, geophysicist Alfred Wegener used illustrations to visually demonstrate the origin of the continents.
### As artistic expression {#as_artistic_expression}
Drawing is one of the easiest ways to visualise ideas and to express one\'s creativity; therefore it has been prominent in the world of art. Throughout much of history, drawing was regarded as the foundation for artistic practice. Initially, artists used and reused wooden tablets for the production of their drawings. Following the widespread availability of paper in the 14th century, the use of drawing in the arts increased. At this point, drawing was commonly used as a tool for thought and investigation, acting as a study medium whilst artists were preparing for their final pieces of work. The Renaissance brought about a great sophistication in drawing techniques, enabling artists to represent things more realistically than before, and revealing an interest in geometry and philosophy.
The invention of the first widely available form of photography led to a shift in the hierarchy of the arts. Photography offered an alternative to drawing as a method for accurately representing visual phenomena, and traditional drawing practice was given less emphasis as an essential skill for artists, particularly so in Western society.
## Notable artists and draftsmen {#notable_artists_and_draftsmen}
Drawing became significant as an art form around the late 15th century, with artists and master engravers such as Albrecht Dürer and Martin Schongauer (c. 1448--1491), the first Northern engraver known by name. Schongauer came from Alsace, and was born into a family of goldsmiths. Albrecht Dürer, a master of the next generation, was also the son of a goldsmith.
Old Master Drawings often reflect the history of the country in which they were produced, and the fundamental characteristics of a nation at that time. In 17th-century Holland, a Protestant country, there were almost no religious artworks, and, with no King or court, most art was bought privately. Drawings of landscapes or genre scenes were often viewed not as sketches but as highly finished works of art. Italian drawings, however, show the influence of Catholicism and the Church, which played a major role in artistic patronage. The same is often true of French drawings, although in the 17th century the disciplines of French Classicism meant drawings were less Baroque than the more free Italian counterparts, which conveyed a greater sense of movement.
In the 20th century Modernism encouraged \"imaginative originality\" and some artists\' approach to drawing became less literal, more abstract. World-renowned artists such as Pablo Picasso, Andy Warhol and Jean-Michel Basquiat helped challenge the status quo, with drawing being very much at the centre of their practice, and often re-interpreting traditional technique.
Basquiat\'s drawings were produced in many different mediums, most commonly ink, pencil, felt-tip or marker, and oil-stick, and he drew on any surface that came to hand, such as doors, clothing, refrigerators, walls and baseball helmets.
The centuries have produced a canon of notable artists and draftsmen, each with their own distinct language of drawing, including:
- 14th, 15th and 16th: Leonardo da Vinci • Albrecht Dürer • Hans Holbein the Younger • Michelangelo • Pisanello • Raphael
- 17th: Claude • Jacques de Gheyn II • Guercino • Nicolas Poussin • Rembrandt • Peter Paul Rubens • Pieter Saenredam
- 18th: François Boucher • Jean-Honoré Fragonard • Giovanni Battista Tiepolo • Antoine Watteau
- 19th: Aubrey Beardsley • Paul Cézanne • Jacques-Louis David • Honoré Daumier • Edgar Degas • Théodore Géricault • Francisco Goya • Jean-Auguste-Dominique Ingres • Pierre-Paul Prud\'hon • Odilon Redon • John Ruskin • Georges Seurat • Henri de Toulouse-Lautrec • Vincent van Gogh
- 20th: Max Beckmann • Jean Dubuffet • M. C. Escher • Arshile Gorky • George Grosz • Paul Klee • Oskar Kokoschka • Käthe Kollwitz • Alfred Kubin • André Masson • Alphonse Mucha • Jules Pascin • Pablo Picasso • Egon Schiele • Jean-Michel Basquiat • Andy Warhol
## Materials
The *medium* is the means by which ink, pigment, or color are delivered onto the drawing surface. Most drawing media either are dry (e.g. graphite, charcoal, pastels, Conté, silverpoint), or use a fluid solvent or carrier (marker, pen and ink). Watercolor pencils can be used dry like ordinary pencils, then moistened with a wet brush to get various painterly effects. Very rarely, artists have drawn with invisible ink, which is usually developed (made visible) afterwards. Metalpoint drawing usually employs either silver or lead. More rarely used are gold, platinum, copper, brass, bronze, and tinpoint.
Paper comes in a variety of different sizes and qualities, ranging from newspaper grade up to high quality and relatively expensive paper sold as individual sheets. Papers vary in texture, hue, acidity, and strength when wet. Smooth paper is good for rendering fine detail, but a more \"toothy\" paper holds the drawing material better. Thus a coarser material is useful for producing deeper contrast.
Newsprint and typing paper may be useful for practice and rough sketches. Tracing paper is used to experiment over a half-finished drawing, and to transfer a design from one sheet to another. Cartridge paper is the basic type of drawing paper sold in pads. Bristol board and even heavier acid-free boards, frequently with smooth finishes, are used for drawing fine detail and do not distort when wet media (ink, washes) are applied. Vellum is extremely smooth and suitable for very fine detail. Coldpressed watercolor paper may be favored for ink drawing due to its texture.
Acid-free, archival quality paper keeps its color and texture far longer than wood pulp based paper such as newsprint, which turns yellow and becomes brittle much sooner.
The basic tools are a drawing board or table, pencil sharpener and eraser, and for ink drawing, blotting paper. Other tools used are circle compass, ruler, and set square. Fixative is used to prevent pencil and crayon marks from smudging. Drafting tape is used to secure paper to drawing surface, and also to mask an area to keep it free of accidental marks, such as sprayed or spattered materials and washes. An easel or slanted table is used to keep the drawing surface in a suitable position, which is generally more horizontal than the position used in painting.
## Technique
Almost all draftsmen use their hands and fingers to apply the media, with the exception of some disabled individuals who draw with their mouth or feet.
Prior to working on an image, the artist typically explores how various media work. They may try different drawing implements on practice sheets to determine value and texture, and how to apply the implement to produce various effects.
The artist\'s choice of drawing strokes affects the appearance of the image. Pen and ink drawings often use hatching -- groups of parallel lines. Cross-hatching uses hatching in two or more different directions to create a darker tone. Broken hatching, or lines with intermittent breaks, form lighter tones -- and controlling the density of the breaks achieves a gradation of tone. Stippling uses dots to produce tone, texture and shade. Different textures can be achieved depending on the method used to build tone.
Drawings in dry media often use similar techniques, though pencils and drawing sticks can achieve continuous variations in tone. Typically a drawing is filled in based on which hand the artist favors. A right-handed artist draws from left to right to avoid smearing the image. Erasers can remove unwanted lines, lighten tones, and clean up stray marks. In a sketch or outline drawing, lines drawn often follow the contour of the subject, creating depth by looking like shadows cast from a light in the artist\'s position.
Sometimes the artist leaves a section of the image untouched while filling in the remainder. The shape of the area to preserve can be painted with masking fluid or cut out of a frisket and applied to the drawing surface, protecting the surface from stray marks until the mask is removed.
Another method to preserve a section of the image is to apply a spray-on *fixative* to the surface. This holds loose material more firmly to the sheet and prevents it from smearing. However the fixative spray typically uses chemicals that can harm the respiratory system, so it should be employed in a well-ventilated area such as outdoors.
Another technique is subtractive drawing in which the drawing surface is covered with graphite or charcoal and then erased to make the image.
## Tone
thumb\|upright=.8\|A pencil portrait by Henry Macbeth-Raeburn, with hatching and shading (1909)

Shading is the technique of varying the tonal values on the paper to represent the shade of the material as well as the placement of the shadows. Careful attention to reflected light, shadows and highlights can result in a very realistic rendition of the image.
Blending uses an implement to soften or spread the original drawing strokes. Blending is most easily done with a medium that does not immediately fix itself, such as graphite, chalk, or charcoal, although freshly applied ink can be smudged, wet or dry, for some effects. For shading and blending, the artist can use a blending stump, tissue, a kneaded eraser, a fingertip, or any combination of them. A piece of chamois is useful for creating smooth textures, and for removing material to lighten the tone. Continuous tone can be achieved with graphite on a smooth surface without blending, but the technique is laborious, involving small circular or oval strokes with a somewhat blunt point.
Shading techniques that also introduce texture to the drawing include hatching and stippling. A number of other methods produce texture. In addition to the choice of paper, drawing material and technique affect texture. Texture can be made to appear more realistic when it is drawn next to a contrasting texture; a coarse texture is more obvious when placed next to a smoothly blended area. A similar effect can be achieved by drawing different tones close together. A light edge next to a dark background stands out to the eye, and almost appears to float above the surface.
The direction and quality of light play a crucial role in shading, influencing the depth and dimension of a drawing. Understanding how light interacts with different surfaces helps artists create a sense of realism, whether rendering smooth, reflective materials or rough, matte textures. Observing real-world lighting conditions and practicing from life can enhance an artist's ability to depict convincing shadows and highlights.
Additionally, advanced shading techniques, such as cross-hatching and scumbling, allow for greater control over tonal transitions and surface detail. Cross-hatching involves layering intersecting lines to build depth and tone, while scumbling uses circular or scribbled strokes to create soft, organic shading. These methods, when combined with careful blending and texture application, provide artists with a versatile toolkit for achieving a range of effects, from soft gradients to bold, high-contrast compositions.
## Form and proportion {#form_and_proportion}
Measuring the dimensions of a subject while blocking in the drawing is an important step in producing a realistic rendition of the subject. Tools such as a compass can be used to measure the angles of different sides. These angles can be reproduced on the drawing surface and then rechecked to make sure they are accurate. Another form of measurement is to compare the relative sizes of different parts of the subject with each other. A finger placed at a point along the drawing implement can be used to compare that dimension with other parts of the image. A ruler can be used both as a straightedge and a device to compute proportions.
thumb\|upright=.4\|Variation of proportion with age
When attempting to draw a complicated shape such as a human figure, it is helpful at first to represent the form with a set of primitive volumes. Almost any form can be represented by some combination of the cube, sphere, cylinder, and cone. Once these basic volumes have been assembled into a likeness, then the drawing can be refined into a more accurate and polished form. The lines of the primitive volumes are removed and replaced by the final likeness. Drawing the underlying construction is a fundamental skill for representational art, and is taught in many books and schools. Its correct application resolves most uncertainties about smaller details, and makes the final image look consistent.
A more refined art of figure drawing relies upon the artist possessing a deep understanding of anatomy and the human proportions. A trained artist is familiar with the skeleton structure, joint location, muscle placement, tendon movement, and how the different parts work together during movement. This allows the artist to render more natural poses that do not appear artificially stiff. The artist is also familiar with how the proportions vary depending on the age of the subject, particularly when drawing a portrait.
## Perspective
Linear perspective is a method of portraying objects on a flat surface so that the dimensions shrink with distance. Each set of parallel, straight edges of any object, whether a building or a table, follows lines that eventually converge at a vanishing point. Typically this convergence point is somewhere along the horizon, as buildings are built level with the flat surface. When multiple structures are aligned with each other, such as buildings along a street, the horizontal tops and bottoms of the structures typically converge at a vanishing point.
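One way to see why all parallel edges share a single vanishing point is the standard pinhole-projection model, added here for illustration (it is not part of the original text). With the picture plane at focal distance $f$ and $z$ measured into the scene, a point projects as

$$(x, y, z) \mapsto \left(\frac{f x}{z}, \frac{f y}{z}\right)$$

A line $p + t d$ with direction $d = (d_x, d_y, d_z)$ and $d_z \neq 0$ then projects to a point approaching $(f d_x / d_z,\, f d_y / d_z)$ as $t \to \infty$. The limit depends only on the direction $d$, so every line parallel to $d$ converges to the same image point: its vanishing point.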
When both the fronts and sides of a building are drawn, then the parallel lines forming a side converge at a second point along the horizon (which may be off the drawing paper.) This is a two-point perspective. Converging the vertical lines to a third point above or below the horizon then produces a three-point perspective.
Depth can also be portrayed by several techniques in addition to the perspective approach above. Objects of similar **size** should appear ever smaller the further they are from the viewer. Thus the back wheel of a cart appears slightly smaller than the front wheel. Depth can be portrayed through the use of **texture**. As the texture of an object gets further away it becomes more compressed and busy, taking on an entirely different character than if it were close. Depth can also be portrayed by reducing the contrast in more distant objects, and by making their colors less saturated. This reproduces the effect of **atmospheric** haze, and causes the eye to focus primarily on objects drawn in the foreground.
## Composition
The composition of the image is an important element in producing an interesting work of artistic merit. The artist plans element placement in the art to communicate ideas and feelings with the viewer. The composition can determine the focus of the art, and result in a harmonious whole that is aesthetically appealing and stimulating.
The illumination of the subject is also a key element in creating an artistic piece, and the interplay of light and shadow is a valuable method in the artist\'s toolbox. The placement of the light sources can make a considerable difference in the type of message that is being presented. Multiple light sources can wash out any wrinkles in a person\'s face, for instance, and give a more youthful appearance. In contrast, a single light source, such as harsh daylight, can serve to highlight any texture or interesting features.
When drawing an object or figure, the skilled artist pays attention to both the area within the silhouette and what lies outside. The exterior is termed the negative space, and can be as important in the representation as the figure. Objects placed in the background of the figure should appear properly placed wherever they can be viewed.
A study is a draft drawing that is made in preparation for a planned final image. Studies can be used to determine the appearances of specific parts of the completed image, or for experimenting with the best approach for accomplishing the end goal. However a well-crafted study can be a piece of art in its own right, and many hours of careful work can go into completing a study.
## Process
Individuals display differences in their ability to produce visually accurate drawings, when a visually accurate drawing is \"recognized as a particular object at a particular time and in a particular space, rendered with little addition of visual detail that can not be seen in the object represented or with little deletion of visual detail.\"
Investigative studies have aimed to explain the reasons why some individuals draw better than others. One study posited four key abilities in the drawing process: motor skills required for mark-making, the drawer\'s own perception of their drawing, perception of objects being drawn, and the ability to make good representational decisions. Following this hypothesis, several studies have sought to conclude which of these processes are most significant in affecting the accuracy of drawings.
**Motor control**
Motor control is an important physical component in the \'Production Phase\' of the drawing process. It has been suggested that motor control plays a role in drawing ability, though its effects are not significant.
**Perception**
It has been suggested that an individual\'s ability to perceive an object they are drawing is the most important stage in the drawing process. This suggestion is supported by the discovery of a robust relationship between perception and drawing ability.
This evidence acted as the basis of Betty Edwards\' how-to-draw book, *Drawing on the Right Side of the Brain*. Edwards aimed to teach her readers how to draw, based on the development of the reader\'s perceptual abilities.
Furthermore, the influential artist and art critic John Ruskin emphasised the importance of perception in the drawing process in his book *The Elements of Drawing*. He stated, \"For I am nearly convinced, that when once we see keenly enough, there is very little difficulty in drawing what we see.\"
**Visual memory**
This has also been shown to influence one\'s ability to create visually accurate drawings. Short-term memory plays an important part in drawing as one\'s gaze shifts between the object they are drawing and the drawing itself.
**Decision-making**
Some studies comparing artists to non-artists have found that artists spend more time thinking strategically while drawing. In particular, artists spend more time on \'metacognitive\' activities such as considering different hypothetical plans for how they might progress with a drawing.
# Denormalization
**Denormalization** is a strategy used on a previously-normalized database to increase performance. In computing, denormalization is the process of trying to improve the read performance of a database, at the expense of losing some write performance, by adding redundant copies of data or by grouping data. It is often motivated by performance or scalability in relational database software needing to carry out very large numbers of read operations. Denormalization differs from the unnormalized form in that denormalization benefits can only be fully realized on a data model that is otherwise normalized.
## Implementation
A normalized design will often \"store\" different but related pieces of information in separate logical tables (called relations). If these relations are stored physically as separate disk files, completing a database query that draws information from several relations (a *join operation*) can be slow. If many relations are joined, it may be prohibitively slow. There are two strategies for dealing with this by denormalization:
- \"DBMS support\": The database management system stores redundant copies in the background, which are kept consistent by the DBMS software
- \"DBA implementation\": The database administrator (or designer) design around the problem by denormalizing the logical data design
### DBMS support {#dbms_support}
With this approach, database administrators can keep the logical design normalized, but allow the database management system (DBMS) to store additional redundant information on disk to optimize query response. In this case it is the DBMS software\'s responsibility to ensure that any redundant copies are kept consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialized views (Oracle, PostgreSQL). A view may, among other factors, represent information in a format convenient for querying, and the index ensures that queries against the view are optimized physically.
### DBA implementation {#dba_implementation}
With this approach, a database administrator or designer has to denormalize the logical data design. With care this can achieve a similar improvement in query response, but at a cost --- it is now the database designer\'s responsibility to ensure that the denormalized database does not become inconsistent. This is done by creating rules in the database called *constraints*, which specify how the redundant copies of information must be kept synchronized; the added complexity of these rules may easily make the denormalization pointless. It is the increase in logical complexity of the database design and the added complexity of the additional constraints that make this approach hazardous. Moreover, constraints introduce a trade-off, speeding up reads (`SELECT` in SQL) while slowing down writes (`INSERT`, `UPDATE`, and `DELETE`). This means a denormalized database under heavy write load may offer *worse* performance than its functionally equivalent normalized counterpart.
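As a concrete sketch of this approach, the snippet below uses Python\'s standard-library `sqlite3` module; the schema and all names are invented for illustration. A one-to-many relationship is denormalized by storing the child-row count on the parent, with triggers acting as the synchronizing rules described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item_count INTEGER DEFAULT 0);
    CREATE TABLE order_items (id INTEGER PRIMARY KEY,
                              order_id INTEGER REFERENCES orders(id));

    -- The redundant item_count speeds up reads but must be maintained on
    -- every write: exactly the trade-off described above.
    CREATE TRIGGER items_ins AFTER INSERT ON order_items BEGIN
        UPDATE orders SET item_count = item_count + 1 WHERE id = NEW.order_id;
    END;
    CREATE TRIGGER items_del AFTER DELETE ON order_items BEGIN
        UPDATE orders SET item_count = item_count - 1 WHERE id = OLD.order_id;
    END;
""")
con.execute("INSERT INTO orders (id) VALUES (1)")
con.executemany("INSERT INTO order_items (order_id) VALUES (?)", [(1,), (1,)])
# Reading the count is now a single-row lookup instead of a join + COUNT(*).
print(con.execute("SELECT item_count FROM orders WHERE id = 1").fetchone())  # (2,)
```

Reads of the count become a single-row lookup, while every insert or delete on `order_items` now pays for an extra `UPDATE`.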
## Denormalization versus not normalized data {#denormalization_versus_not_normalized_data}
A denormalized data model is not the same as a data model that has not been normalized, and denormalization should only take place after a satisfactory level of normalization has taken place and after any required constraints and/or rules have been created to deal with the inherent anomalies in the design. For example, all the relations are in third normal form and any relations with join dependencies and multi-valued dependencies are handled appropriately.
Examples of denormalization techniques include:
- \"Storing\" the count of the \"many\" elements in a one-to-many relationship as an attribute of the \"one\" relation
- Adding attributes to a relation from another relation with which it will be joined
- Star schemas, which are also known as fact-dimension models and have been extended to snowflake schemas
- Prebuilt summarization or OLAP cubes
With the continued dramatic increase in storage, processing power, and bandwidth, on all levels, denormalization in databases has moved from being an unusual or specialized technique to the commonplace, or even the norm.`{{when|date=June 2024}}`{=mediawiki} For example, one specific downside of denormalization was, simply, that it \"uses more storage\" (that is to say, literally more columns in a database). With the exception of truly enormous systems, increased storage requirements are considered a relatively small problem in the 2020s.
# Differential topology
In mathematics, **differential topology** is the field dealing with the topological properties and smooth properties of smooth manifolds. In this sense differential topology is distinct from the closely related field of differential geometry, which concerns the *geometric* properties of smooth manifolds, including notions of size, distance, and rigid shape. By comparison differential topology is concerned with coarser properties, such as the number of holes in a manifold, its homotopy type, or the structure of its diffeomorphism group. Because many of these coarser properties may be captured algebraically, differential topology has strong links to algebraic topology. The central goal of the field of differential topology is the classification of all smooth manifolds up to diffeomorphism. Since dimension is an invariant of smooth manifolds up to diffeomorphism type, this classification is often studied by classifying the (connected) manifolds in each dimension separately:
- In dimension 1, the only smooth manifolds up to diffeomorphism are the circle, the real number line, and allowing a boundary, the half-closed interval $[0,1)$ and fully closed interval $[0,1]$.
- In dimension 2, every closed surface is classified up to diffeomorphism by its genus, the number of holes (or equivalently its Euler characteristic; see the formula after this list), and whether or not it is orientable. This is the famous classification of closed surfaces. Already in dimension two the classification of non-compact surfaces becomes difficult, due to the existence of exotic spaces such as Jacob\'s ladder.
- In dimension 3, William Thurston\'s geometrization conjecture, proven by Grigori Perelman, gives a partial classification of compact three-manifolds. Included in this theorem is the Poincaré conjecture, which states that any closed, simply connected three-manifold is homeomorphic (and in fact diffeomorphic) to the 3-sphere.
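To make the dimension-2 entry above concrete, the standard relation between genus and Euler characteristic can be written out (a well-known fact, added here for reference):

$$\chi(\Sigma_g) = 2 - 2g, \qquad \chi(N_k) = 2 - k$$

where $\Sigma_g$ is the closed orientable surface of genus $g$ and $N_k$ is the closed non-orientable surface with $k$ cross-caps. Together with orientability, the Euler characteristic thus determines a closed surface up to diffeomorphism.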
Beginning in dimension 4, the classification becomes much more difficult for two reasons. Firstly, every finitely presented group appears as the fundamental group of some 4-manifold, and since the fundamental group is a diffeomorphism invariant, this makes the classification of 4-manifolds at least as difficult as the classification of finitely presented groups. By the word problem for groups, which is equivalent to the halting problem, it is impossible to classify such groups, so a full topological classification is impossible. Secondly, beginning in dimension four it is possible to have smooth manifolds that are homeomorphic, but with distinct, non-diffeomorphic smooth structures. This is true even for the Euclidean space $\mathbb{R}^4$, which admits many exotic $\mathbb{R}^4$ structures. This means that the study of differential topology in dimensions 4 and higher must use tools genuinely outside the realm of the regular continuous topology of topological manifolds. One of the central open problems in differential topology is the four-dimensional smooth Poincaré conjecture, which asks if every smooth 4-manifold that is homeomorphic to the 4-sphere, is also diffeomorphic to it. That is, does the 4-sphere admit only one smooth structure? This conjecture is true in dimensions 1, 2, and 3, by the above classification results, but is known to be false in dimension 7 due to the Milnor spheres.
Important tools in studying the differential topology of smooth manifolds include the construction of smooth topological invariants of such manifolds, such as de Rham cohomology or the intersection form, as well as smoothable topological constructions, such as smooth surgery theory or the construction of cobordisms. Morse theory is an important tool which studies smooth manifolds by considering the critical points of differentiable functions on the manifold, demonstrating how the smooth structure of the manifold enters into the set of tools available. Oftentimes more geometric or analytical techniques may be used, by equipping a smooth manifold with a Riemannian metric or by studying a differential equation on it. Care must be taken to ensure that the resulting information is insensitive to this choice of extra structure, and so genuinely reflects only the topological properties of the underlying smooth manifold. For example, the Hodge theorem provides a geometric and analytical interpretation of the de Rham cohomology, and gauge theory was used by Simon Donaldson to prove facts about the intersection form of simply connected 4-manifolds. In some cases techniques from contemporary physics may appear, such as topological quantum field theory, which can be used to compute topological invariants of smooth spaces.
Famous theorems in differential topology include the Whitney embedding theorem, the hairy ball theorem, the Hopf theorem, the Poincaré--Hopf theorem, Donaldson\'s theorem, and the Poincaré conjecture.
## Description
Differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are \'softer\' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold---that is, one can smoothly \"flatten out\" certain manifolds, but it might require distorting the space and affecting the curvature or volume.
On the other hand, smooth manifolds are more rigid than the topological manifolds. John Milnor discovered that some spheres have more than one smooth structure---see Exotic sphere and Donaldson\'s theorem. Michel Kervaire exhibited topological manifolds with no smooth structure at all. Some constructions of smooth manifold theory, such as the existence of tangent bundles, can be done in the topological setting with much more work, and others cannot.
One of the main topics in differential topology is the study of special kinds of smooth mappings between manifolds, namely immersions and submersions, and the intersections of submanifolds via transversality. More generally one is interested in properties and invariants of smooth manifolds that are carried over by diffeomorphisms, another special kind of smooth mapping. Morse theory is another branch of differential topology, in which topological information about a manifold is deduced from changes in the rank of the Jacobian of a function.
For a list of differential topology topics, see List of differential geometry topics.
## Differential topology versus differential geometry {#differential_topology_versus_differential_geometry}
Differential topology and differential geometry are first characterized by their *similarity*. They both study primarily the properties of differentiable manifolds, sometimes with a variety of structures imposed on them.
One major difference lies in the nature of the problems that each subject tries to address. In one view, differential topology distinguishes itself from differential geometry by studying primarily those problems that are *inherently global*. Consider the example of a coffee cup and a donut. From the point of view of differential topology, the donut and the coffee cup are *the same* (in a sense). This is an inherently global view, though, because there is no way for the differential topologist to tell whether the two objects are the same (in this sense) by looking at just a tiny (*local*) piece of either of them. They must have access to each entire (*global*) object.
From the point of view of differential geometry, the coffee cup and the donut are *different* because it is impossible to rotate the coffee cup in such a way that its configuration matches that of the donut. This is also a global way of thinking about the problem. But an important distinction is that the geometer does not need the entire object to decide this. By looking, for instance, at just a tiny piece of the handle, they can decide that the coffee cup is different from the donut because the handle is thinner (or more curved) than any piece of the donut.
To put it succinctly, differential topology studies structures on manifolds that, in a sense, have no interesting local structure. Differential geometry studies structures on manifolds that do have an interesting local (or sometimes even infinitesimal) structure.
More mathematically, for example, the problem of constructing a diffeomorphism between two manifolds of the same dimension is inherently global since *locally* two such manifolds are always diffeomorphic. Likewise, the problem of computing a quantity on a manifold that is invariant under differentiable mappings is inherently global, since any local invariant will be *trivial* in the sense that it is already exhibited in the topology of $\R^n$. Moreover, differential topology does not restrict itself necessarily to the study of diffeomorphism. For example, symplectic topology---a subbranch of differential topology---studies global properties of symplectic manifolds. Differential geometry concerns itself with problems---which may be local *or* global---that always have some non-trivial local properties. Thus differential geometry may study differentiable manifolds equipped with a *connection*, a *metric* (which may be Riemannian, pseudo-Riemannian, or Finsler), a special sort of *distribution* (such as a CR structure), and so on.
This distinction between differential geometry and differential topology is blurred, however, in questions specifically pertaining to local diffeomorphism invariants such as the tangent space at a point. Differential topology also deals with questions like these, which specifically pertain to the properties of differentiable mappings on $\R^n$ (for example the tangent bundle, jet bundles, the Whitney extension theorem, and so forth).
The distinction is concise in abstract terms:
- Differential topology is the study of the (infinitesimal, local, and global) properties of structures on manifolds that have *only trivial* local moduli.
- Differential geometry is such a study of structures on manifolds that have one or more *non-trivial* local moduli.
# Diffeomorphism
In mathematics, a **diffeomorphism** is an isomorphism of differentiable manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are continuously differentiable.
## Definition
Given two differentiable manifolds $M$ and $N$, a continuously differentiable map $f \colon M \rightarrow N$ is a **diffeomorphism** if it is a bijection and its inverse $f^{-1} \colon N \rightarrow M$ is differentiable as well. If these functions are $r$ times continuously differentiable, $f$ is called a $C^r$-diffeomorphism.
Two manifolds $M$ and $N$ are **diffeomorphic** (usually denoted $M \simeq N$) if there is a diffeomorphism $f$ from $M$ to $N$. Two $C^r$-differentiable manifolds are $C^r$-diffeomorphic if there is an $r$ times continuously differentiable bijective map between them whose inverse is also $r$ times continuously differentiable.
## Diffeomorphisms of subsets of manifolds {#diffeomorphisms_of_subsets_of_manifolds}
Given a subset $X$ of a manifold $M$ and a subset $Y$ of a manifold $N$, a function $f:X\to Y$ is said to be smooth if for all $p$ in $X$ there is a neighborhood $U\subset M$ of $p$ and a smooth function $g:U\to N$ such that the restrictions agree: $g_{|U \cap X} = f_{|U \cap X}$ (note that $g$ is an extension of $f$). The function $f$ is said to be a diffeomorphism if it is bijective, smooth and its inverse is smooth.
## Local description {#local_description}
Testing whether a differentiable map is a diffeomorphism can be made locally under some mild restrictions. This is the Hadamard-Caccioppoli theorem:
If $U$, $V$ are connected open subsets of $\R^n$ such that $V$ is simply connected, a differentiable map $f:U\to V$ is a diffeomorphism if it is proper and if the differential $Df_x:\R^n\to\R^n$ is bijective (and hence a linear isomorphism) at each point $x$ in $U$.
Some remarks:
It is essential for $V$ to be simply connected for the function $f$ to be globally invertible (under the sole condition that its derivative be a bijective map at each point). For example, consider the \"realification\" of the complex square function
: $\begin{cases} f : \R^2 \setminus \{(0,0)\} \to \R^2 \setminus \{(0,0)\} \\ (x,y) \mapsto (x^2 - y^2,\ 2xy). \end{cases}$

Then $f$ is surjective and it satisfies
: $\det Df_x = 4(x^2+y^2) \neq 0.$
Thus, though $Df_x$ is bijective at each point, $f$ is not invertible because it fails to be injective (e.g. $f(1,0)=(1,0)=f(-1,0)$).
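To see this failure concretely, here is a minimal Python sketch (illustrative code; the function names are invented) checking both the non-vanishing determinant and the failure of injectivity:

```python
def f(x, y):
    # Realification of the complex square: (x, y) -> (x^2 - y^2, 2xy)
    return (x**2 - y**2, 2 * x * y)

def det_Df(x, y):
    # The Jacobian of f is [[2x, -2y], [2y, 2x]], so det Df = 4(x^2 + y^2)
    return 4 * (x**2 + y**2)

# Df is invertible at (1, 0) and (-1, 0) ...
assert det_Df(1, 0) != 0 and det_Df(-1, 0) != 0
# ... yet f identifies the two points, so f is not injective
assert f(1, 0) == f(-1, 0) == (1, 0)
```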
Since the differential at a point (for a differentiable function)
: $Df_x : T_xU \to T_{f(x)}V$
is a linear map, it has a well-defined inverse if and only if $Df_x$ is a bijection. The matrix representation of $Df_x$ is the $n\times n$ matrix of first-order partial derivatives whose entry in the $i$-th row and $j$-th column is $\partial f_i / \partial x_j$. This so-called Jacobian matrix is often used for explicit computations.
Diffeomorphisms are necessarily between manifolds of the same dimension. Imagine $f$ going from dimension $n$ to dimension $k$. If $n<k$ then $Df_x$ could never be surjective, and if $n>k$ then $Df_x$ could never be injective. In both cases, therefore, $Df_x$ fails to be a bijection.
If $Df_x$ is a bijection at $x$ then $f$ is said to be a local diffeomorphism (since, by continuity, $Df_y$ will also be bijective for all $y$ sufficiently close to $x$).
Given a smooth map from dimension $n$ to dimension $k$, if $Df$ (or, locally, $Df_x$) is surjective, $f$ is said to be a submersion (or, locally, a \"local submersion\"); and if $Df$ (or, locally, $Df_x$) is injective, $f$ is said to be an immersion (or, locally, a \"local immersion\").
A differentiable bijection is *not* necessarily a diffeomorphism. $f(x)=x^3$, for example, is not a diffeomorphism from $\R$ to itself because its derivative vanishes at 0 (and hence its inverse is not differentiable at 0). This is an example of a homeomorphism that is not a diffeomorphism.
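A quick numerical illustration of this example (hypothetical helper code, not from the source) shows the inverse\'s difference quotient blowing up at the origin:

```python
import math

def f(x):
    return x ** 3

def f_inv(y):
    # Real cube root, with the sign handled explicitly
    return math.copysign(abs(y) ** (1 / 3), y)

# The difference quotient of f_inv at 0 diverges like h**(-2/3),
# reflecting that f'(0) = 0 prevents f_inv from being differentiable there.
for h in (1e-2, 1e-4, 1e-6):
    print(h, f_inv(h) / h)
```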
When $f$ is a map between differentiable manifolds, a diffeomorphic $f$ is a stronger condition than a homeomorphic $f$. For a diffeomorphism, $f$ and its inverse need to be differentiable; for a homeomorphism, $f$ and its inverse need only be continuous. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism.
$f:M\to N$ is a diffeomorphism if, in coordinate charts, it satisfies the definition above. More precisely: Pick any cover of $M$ by compatible coordinate charts and do the same for $N$. Let $\phi$ and $\psi$ be charts on, respectively, $M$ and $N$, with $U$ and $V$ as, respectively, the images of $\phi$ and $\psi$. The map $\psi f\phi^{-1}:U\to V$ is then a diffeomorphism as in the definition above, whenever $f(\phi^{-1}(U))\subseteq\psi^{-1}(V)$.
## Examples
Since any manifold can be locally parametrised, we can consider some explicit maps from $\R^2$ into $\R^2$.
- Let
:
: $f(x,y) = \left (x^2 + y^3, x^2 - y^3 \right ).$
: We can calculate the Jacobian matrix:
: $J_f = \begin{pmatrix} 2x & 3y^2 \\ 2x & -3y^2 \end{pmatrix} .$
: The Jacobian matrix has zero determinant if and only if $xy=0$. We see that $f$ could only be a diffeomorphism away from the $x$-axis and the $y$-axis. However, $f$ is not bijective since $f(x,y)=f(-x,y)$, and thus it cannot be a diffeomorphism.
- Let
:
: $g(x,y) = \left (a_0 + a_{1,0}x + a_{0,1}y + \cdots, \ b_0 + b_{1,0}x + b_{0,1}y + \cdots \right )$
: where the $a_{i,j}$ and $b_{i,j}$ are arbitrary real numbers, and the omitted terms are of degree at least two in *x* and *y*. We can calculate the Jacobian matrix at **0**:
: $J_g(0,0) = \begin{pmatrix} a_{1,0} & a_{0,1} \\ b_{1,0} & b_{0,1} \end{pmatrix}.$
: We see that *g* is a local diffeomorphism at **0** if, and only if,
: $a_{1,0}b_{0,1} - a_{0,1}b_{1,0} \neq 0,$
: i.e. the linear terms in the components of *g* are linearly independent as polynomials.
- Let
:
: $h(x,y) = \left (\sin(x^2 + y^2), \cos(x^2 + y^2) \right ).$
: We can calculate the Jacobian matrix:
: $J_h = \begin{pmatrix} 2x\cos(x^2 + y^2) & 2y\cos(x^2 + y^2) \\ -2x\sin(x^2+y^2) & -2y\sin(x^2 + y^2) \end{pmatrix} .$
: The Jacobian matrix has zero determinant everywhere! In fact we see that the image of *h* is the unit circle.
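These determinants are easy to reproduce symbolically. The following SymPy sketch (illustrative, assuming SymPy is available) recovers the first and third computations above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

f = sp.Matrix([x**2 + y**3, x**2 - y**3])
h = sp.Matrix([sp.sin(x**2 + y**2), sp.cos(x**2 + y**2)])

# det J_f = -12*x*y**2, which vanishes exactly when xy = 0
print(sp.factor(f.jacobian([x, y]).det()))
# det J_h simplifies to 0: h collapses the plane onto the unit circle
print(sp.simplify(h.jacobian([x, y]).det()))
```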
### Surface deformations {#surface_deformations}
In mechanics, a stress-induced transformation is called a deformation and may be described by a diffeomorphism. A diffeomorphism $f:U\to V$ between two surfaces $U$ and $V$ has a Jacobian matrix $Df$ that is an invertible matrix. In fact, it is required that for $p$ in $U$, there is a neighborhood of $p$ in which the Jacobian $Df$ stays non-singular. Suppose that in a chart of the surface, $f(x,y) = (u,v).$
The total differential of *u* is
$$du = \frac{\partial u}{\partial x} dx + \frac{\partial u}{\partial y} dy$$, and similarly for *v*. Then the image $(du, dv) = (dx, dy) Df$ is a linear transformation, fixing the origin, and expressible as the action of a complex number of a particular type. When (*dx*, *dy*) is also interpreted as that type of complex number, the action is of complex multiplication in the appropriate complex number plane. As such, there is a type of angle (Euclidean, hyperbolic, or slope) that is preserved in such a multiplication. Due to *Df* being invertible, the type of complex number is uniform over the surface. Consequently, a surface deformation or diffeomorphism of surfaces has the **conformal property** of preserving (the appropriate type of) angles.
## Diffeomorphism group {#diffeomorphism_group}
Let $M$ be a differentiable manifold that is second-countable and Hausdorff. The **diffeomorphism group** of $M$ is the group of all $C^r$ diffeomorphisms of $M$ to itself, denoted by $\text{Diff}^r(M)$ or, when $r$ is understood, $\text{Diff}(M)$. This is a \"large\" group, in the sense that---provided $M$ is not zero-dimensional---it is not locally compact.
### Topology
The diffeomorphism group has two natural topologies: *weak* and *strong* `{{harv|Hirsch|1997}}`{=mediawiki}. When the manifold is compact, these two topologies agree. The weak topology is always metrizable. When the manifold is not compact, the strong topology captures the behavior of functions \"at infinity\" and is not metrizable. It is, however, still Baire.
Fixing a Riemannian metric on $M$, the weak topology is the topology induced by the family of metrics
: $d_K(f,g) = \sup\nolimits_{x\in K} d(f(x),g(x)) + \sum\nolimits_{1\le p\le r} \sup\nolimits_{x\in K} \left \|D^pf(x) - D^pg(x) \right \|$
as $K$ varies over compact subsets of $M$. Indeed, since $M$ is $\sigma$-compact, there is a sequence of compact subsets $K_n$ whose union is $M$. Then:
: $d(f,g) = \sum\nolimits_n 2^{-n}\frac{d_{K_n}(f,g)}{1+d_{K_n}(f,g)}.$
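As a numerical aside, the zeroth-order ($p = 0$) contribution to $d_K$ can be approximated by sampling on a grid; the sketch below (illustrative code, not a true supremum) compares a diffeomorphism of $\R$ with the identity on $K = [0,1]$:

```python
import numpy as np

# Approximate the p = 0 term of d_K for two self-maps of R, with K = [0, 1]
xs = np.linspace(0.0, 1.0, 1001)

f = lambda t: t + 0.1 * np.sin(2 * np.pi * t)  # a diffeomorphism of R
g = lambda t: t                                # the identity

d_K0 = np.max(np.abs(f(xs) - g(xs)))  # grid approximation of sup |f - g| on K
print(d_K0)  # ~0.1, attained where sin(2*pi*t) = +/- 1
```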
The diffeomorphism group equipped with its weak topology is locally homeomorphic to the space of $C^r$ vector fields `{{harv|Leslie|1967}}`{=mediawiki}. Over a compact subset of $M$, this follows by fixing a Riemannian metric on $M$ and using the exponential map for that metric. If $r$ is finite and the manifold is compact, the space of vector fields is a Banach space. Moreover, the transition maps from one chart of this atlas to another are smooth, making the diffeomorphism group into a Banach manifold with smooth right translations; left translations and inversion are only continuous. If $r=\infty$, the space of vector fields is a Fréchet space. Moreover, the transition maps are smooth, making the diffeomorphism group into a Fréchet manifold and even into a regular Fréchet Lie group. If the manifold is $\sigma$-compact and not compact the full diffeomorphism group is not locally contractible for any of the two topologies. One has to restrict the group by controlling the deviation from the identity near infinity to obtain a diffeomorphism group which is a manifold; see `{{harv|Michor|Mumford|2013}}`{=mediawiki}.
### Lie algebra {#lie_algebra}
The Lie algebra of the diffeomorphism group of $M$ consists of all vector fields on $M$ equipped with the Lie bracket of vector fields. Somewhat formally, this is seen by making a small change to the coordinate $x$ at each point in space:
: $x^{\mu} \mapsto x^{\mu} + \varepsilon h^{\mu}(x)$
so the infinitesimal generators are the vector fields
: $L_{h} = h^{\mu}(x)\frac{\partial}{\partial x^\mu}.$
### Examples {#examples_1}
- When $M=G$ is a Lie group, there is a natural inclusion of $G$ in its own diffeomorphism group via left-translation. Let $\text{Diff}(G)$ denote the diffeomorphism group of $G$, then there is a splitting $\text{Diff}(G)\simeq G\times\text{Diff}(G,e)$, where $\text{Diff}(G,e)$ is the subgroup of $\text{Diff}(G)$ that fixes the identity element of the group.
- The diffeomorphism group of Euclidean space $\R^n$ consists of two components, consisting of the orientation-preserving and orientation-reversing diffeomorphisms. In fact, the general linear group is a deformation retract of the subgroup $\text{Diff}(\R^n,0)$ of diffeomorphisms fixing the origin under the map $f(x)\to f(tx)/t, t\in(0,1]$. In particular, the general linear group is also a deformation retract of the full diffeomorphism group.
- For a finite set of points, the diffeomorphism group is simply the symmetric group. Similarly, if $M$ is any manifold there is a group extension $0\to\text{Diff}_0(M)\to\text{Diff}(M)\to\Sigma(\pi_0(M))$. Here $\text{Diff}_0(M)$ is the subgroup of $\text{Diff}(M)$ that preserves all the components of $M$, and $\Sigma(\pi_0(M))$ is the permutation group of the set $\pi_0(M)$ (the components of $M$). Moreover, the image of the map $\text{Diff}(M)\to\Sigma(\pi_0(M))$ is the bijections of $\pi_0(M)$ that preserve diffeomorphism classes.
### Transitivity
For a connected manifold $M$, the diffeomorphism group acts transitively on $M$. More generally, the diffeomorphism group acts transitively on the configuration space $C_k M$. If $M$ is at least two-dimensional, the diffeomorphism group acts transitively on the configuration space $F_k M$ and the action on $M$ is multiply transitive `{{harv|Banyaga|1997|p=29}}`{=mediawiki}.
### Extensions of diffeomorphisms {#extensions_of_diffeomorphisms}
In 1926, Tibor Radó asked whether the harmonic extension of any homeomorphism or diffeomorphism of the unit circle to the unit disc yields a diffeomorphism on the open disc. An elegant proof was provided shortly afterwards by Hellmuth Kneser. In 1945, Gustave Choquet, apparently unaware of this result, produced a completely different proof.
The (orientation-preserving) diffeomorphism group of the circle is pathwise connected. This can be seen by noting that any such diffeomorphism can be lifted to a diffeomorphism $f$ of the reals satisfying $f(x+1)=f(x)+1$; this space is convex and hence path-connected. A smooth, eventually constant path to the identity gives a second, more elementary way of extending a diffeomorphism from the circle to the open unit disc (a special case of the Alexander trick). Moreover, the diffeomorphism group of the circle has the homotopy-type of the orthogonal group $O(2)$.
The corresponding extension problem for diffeomorphisms of higher-dimensional spheres $S^{n-1}$ was much studied in the 1950s and 1960s, with notable contributions from René Thom, John Milnor and Stephen Smale. An obstruction to such extensions is given by the finite abelian group $\Gamma_n$, the \"group of twisted spheres\", defined as the quotient of the abelian component group of the diffeomorphism group by the subgroup of classes extending to diffeomorphisms of the ball $B^n$.
### Connectedness
For manifolds, the diffeomorphism group is usually not connected. Its component group is called the mapping class group. In dimension 2 (i.e. surfaces), the mapping class group is a finitely presented group generated by Dehn twists; this has been proved by Max Dehn, W. B. R. Lickorish, and Allen Hatcher. Max Dehn and Jakob Nielsen showed that it can be identified with the outer automorphism group of the fundamental group of the surface.
William Thurston refined this analysis by classifying elements of the mapping class group into three types: those equivalent to a periodic diffeomorphism; those equivalent to a diffeomorphism leaving a simple closed curve invariant; and those equivalent to pseudo-Anosov diffeomorphisms. In the case of the torus $S^1\times S^1=\R^2/\Z^2$, the mapping class group is simply the modular group $\text{SL}(2,\Z)$ and the classification becomes classical in terms of elliptic, parabolic and hyperbolic matrices. Thurston accomplished his classification by observing that the mapping class group acted naturally on a compactification of Teichmüller space; as this enlarged space was homeomorphic to a closed ball, the Brouwer fixed-point theorem became applicable. Smale conjectured that if $M$ is an oriented smooth closed manifold, the identity component of the group of orientation-preserving diffeomorphisms is simple. This had first been proved for a product of circles by Michel Herman; it was proved in full generality by Thurston.
### Homotopy types {#homotopy_types}
- The diffeomorphism group of $S^2$ has the homotopy-type of the subgroup $O(3)$. This was proven by Steve Smale.
- The diffeomorphism group of the torus has the homotopy-type of its linear automorphisms: $S^1\times S^1\times\text{GL}(2,\Z)$.
- The diffeomorphism groups of orientable surfaces of genus $g>1$ have the homotopy-type of their mapping class groups (i.e. the components are contractible).
- The homotopy-type of the diffeomorphism groups of 3-manifolds are fairly well understood via the work of Ivanov, Hatcher, Gabai and Rubinstein, although there are a few outstanding open cases (primarily 3-manifolds with finite fundamental groups).
- The homotopy-type of diffeomorphism groups of $n$-manifolds for $n>3$ are poorly understood. For example, it is an open problem whether or not $\text{Diff}(S^4)$ has more than two components. Via Milnor, Kahn and Antonelli, however, it is known that provided $n>6$, $\text{Diff}(S^n)$ does not have the homotopy-type of a finite CW-complex.
## Homeomorphism and diffeomorphism {#homeomorphism_and_diffeomorphism}
Since every diffeomorphism is a homeomorphism, given a pair of manifolds which are diffeomorphic to each other they are in particular homeomorphic to each other. The converse is not true in general.
While it is easy to find homeomorphisms that are not diffeomorphisms, it is more difficult to find a pair of homeomorphic manifolds that are not diffeomorphic. In dimensions 1, 2 and 3, any pair of homeomorphic smooth manifolds are diffeomorphic. In dimension 4 or greater, examples of homeomorphic but not diffeomorphic pairs exist. The first such example was constructed by John Milnor in dimension 7. He constructed a smooth 7-dimensional manifold (now called Milnor\'s sphere) that is homeomorphic to the standard 7-sphere but not diffeomorphic to it. There are, in fact, 28 oriented diffeomorphism classes of manifolds homeomorphic to the 7-sphere (each of them is the total space of a fiber bundle over the 4-sphere with the 3-sphere as the fiber).
More unusual phenomena occur for 4-manifolds. In the early 1980s, a combination of results due to Simon Donaldson and Michael Freedman led to the discovery of exotic $\R^4$: there are uncountably many pairwise non-diffeomorphic open subsets of $\R^4$ each of which is homeomorphic to $\R^4$, and also there are uncountably many pairwise non-diffeomorphic differentiable manifolds homeomorphic to $\R^4$ that do not embed smoothly in $\R^4$.
# Dyne
The **dyne** (symbol: **dyn**; `{{etymology|grc|''{{wikt-lang|grc|δύναμις}}'' ({{grc-transl|δύναμις}})|power, force}}`{=mediawiki}) is a derived unit of force specified in the centimetre--gram--second (CGS) system of units, a predecessor of the modern SI.
## History
The name **dyne** was first proposed as a CGS unit of force in 1873 by a Committee of the British Association for the Advancement of Science.
## Definition
The dyne is defined as \"the force required to accelerate a mass of one gram at a rate of one centimetre per second squared\". An equivalent definition of the dyne is \"that force which, acting for one second, will produce a change of velocity of one centimetre per second in a mass of one gram\".
One dyne is equal to 10 micronewtons, 10^−5^ N or to 10 nsn (nanosthenes) in the old metre--tonne--second system of units.
- 1 dyn = 1 g⋅cm/s^2^ = 10^−5^ kg⋅m/s^2^ = 10^−5^ N
- 1 N = 1 kg⋅m/s^2^ = 10^5^ g⋅cm/s^2^ = 10^5^ dyn
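A short Python check of these conversion factors (purely illustrative; the names are invented):

```python
DYN_PER_NEWTON = 1e5  # 1 N = 10^5 dyn

def dyn_to_newton(force_dyn):
    return force_dyn / DYN_PER_NEWTON

def newton_to_dyn(force_newton):
    return force_newton * DYN_PER_NEWTON

assert dyn_to_newton(1.0) == 1e-5  # 1 dyn = 10^-5 N = 10 micronewtons
assert newton_to_dyn(1.0) == 1e5   # 1 N = 100,000 dyn
```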
## Use
The **dyne per centimetre** is a unit traditionally used to measure surface tension. For example, the surface tension of distilled water is 71.99 dyn/cm at 25 °C (77 °F). (In SI units this is `{{val|71.99|e=-3|u=N/m}}`{=mediawiki} or `{{val|71.99|u=mN/m}}`{=mediawiki}.)
# December 9
# Diaspora studies
**Diaspora studies** is an academic field established in the late 20th century to study dispersed ethnic populations, which are often termed diaspora peoples. The usage of the term diaspora carries the connotation of forced resettlement, due to expulsion, coercion, slavery, racism, or war, especially nationalist conflicts.
## Academic institutes {#academic_institutes}
- The International Institute for Diasporic and Transcultural Studies (IIDTS) --- a transnational institute incorporating Jean Moulin University (Lyons, France), the University of Cyprus, Sun Yat-sen University (Guangzhou, China) and Liverpool Hope University (UK) --- is a dedicated research network operating in a transdisciplinary logic and focused on cultural representation (and auto-representation) of diasporic communities throughout the world. The institute sponsors the trilingual publication *Transtext(e)s-Transcultures: A Journal of Global Cultural Studies*.
- Jawaharlal Nehru University\'s School of International Studies, New Delhi ([www.jnu.ac.in](http://www.jnu.ac.in)) has a strong research programme, DIMP (Diaspora and International Programme), and its faculty run the Organisation for Diaspora Initiatives ([www.odi.in](https://web.archive.org/web/20130813135257/http://www.odi/)), an international network of academic researchers focused on studying diaspora from an international perspective and examining diaspora as a resource in international relations. ODI publishes the research journal *Diaspora Studies* with Routledge, London.
- Golong Gilig Institute of Javanese Diaspora Studies, Indonesia
# Stab-in-the-back myth
thumb\|upright=1.35\|An illustration from a 1919 Austrian postcard showing a caricatured Jew stabbing a German Army soldier in the back with a dagger. The capitulation of the Central Powers was blamed on communists, Bolsheviks, and the Weimar Republic, but in particular on Jews.

The **stab-in-the-back myth** (*Dolchstoßlegende*, `{{IPA|de|ˈdɔlçʃtoːsleˌɡɛndə|pron|De-Dolchstoßlegende.ogg}}`{=mediawiki}, `{{literally|dagger-stab legend}}`{=mediawiki}) was an antisemitic and anti-communist conspiracy theory that was widely believed and promulgated in Germany after 1918. It maintained that the Imperial German Army did not lose World War I on the battlefield, but was instead betrayed by certain citizens on the home front -- especially Jews, revolutionary socialists who fomented strikes and labour unrest, and republican politicians who had overthrown the House of Hohenzollern in the German Revolution of 1918--1919. Advocates of the myth denounced the German government leaders who had signed the Armistice of 11 November 1918 as the \"**November criminals**\" (*Novemberverbrecher*).
When Adolf Hitler and the Nazi Party rose to power in 1933, they made the conspiracy theory an integral part of their official history of the 1920s, portraying the Weimar Republic as the work of the \"November criminals\" who had \"stabbed the nation in the back\" in order to seize power. Nazi propaganda depicted Weimar Germany as \"a morass of corruption, degeneracy, national humiliation, ruthless persecution of the honest \'national opposition\'`{{snd}}`{=mediawiki}fourteen years of rule by Jews, Marxists, and \'cultural Bolsheviks\', who had at last been swept away by the National Socialist movement under Hitler and the victory of the \'national revolution\' of 1933\".
Historians inside and outside of Germany, whilst recognising that economic and morale collapse on the home front was a factor in German defeat, unanimously reject the myth. Historians and military theorists point to lack of further Imperial German Army reserves, the danger of invasion from the south, and the overwhelming of German forces on the western front by more numerous Allied forces, particularly after the entrance of the United States into the war, as evidence that Germany had already lost the war militarily by late 1918.
## Background
In the later part of World War I, the Supreme High Command (*Oberste Heeresleitung*, OHL) controlled not only the military but also a large part of the economy through the Auxiliary Services Act of December 1916, which under the Hindenburg Programme aimed at a total mobilisation of the economy for war production. In order to implement the Act, however, *Generalfeldmarschall* Paul von Hindenburg and his Chief-of-Staff, First Quartermaster General Erich Ludendorff had to make significant concessions to labour unions and the Reichstag. Hindenburg and Ludendorff threatened to resign in July 1917 if the Emperor did not remove Chancellor Theobald von Bethmann Hollweg. He had lost his usefulness to them when he lost the confidence of the Reichstag after it passed the Reichstag Peace Resolution calling for a negotiated peace without annexations. Bethmann Hollweg resigned and was replaced by Georg Michaelis, whose appointment was supported by the OHL. After only 100 days in office, however, he became the first chancellor to be ousted by the Reichstag.
After years of fighting and having incurred millions of casualties, the United Kingdom and France were wary about an invasion of Germany with its unknown consequences. However the Allies had been amply resupplied by the United States, which had fresh armies ready for combat. On the Western Front, although the Hindenburg Line had been penetrated and German forces were in retreat, the Allied armies had only crossed the 1914 German frontier in a few places in Alsace-Lorraine (see below map). Meanwhile, on the Eastern Front, Germany had already won its war against Russia, concluded with the Treaty of Brest-Litovsk. In the West, Germany had successes with the Spring Offensive of 1918 but the attack had run out of momentum, the Allies had regrouped and in the Hundred Days Offensive retaken lost ground with no sign of stopping. Contributing to the *Dolchstoßlegende*, the overall failure of the German offensive was blamed on strikes in the arms industry at a critical moment, leaving soldiers without an adequate supply of materiel. The strikes were seen as having been instigated by treasonous elements, with the Jews taking most of the blame.
The weakness of Germany\'s strategic position was exacerbated by the rapid collapse of the other Central Powers in late 1918, following Allied victories on the Macedonian and Italian fronts. Bulgaria was the first to sign an armistice on 29 September 1918, at Salonica. On 30 October the Ottoman Empire capitulated at Mudros. On 3 November Austria-Hungary sent a flag of truce to the Italian Army to ask for an armistice. The terms, arranged by telegraph with the Allied Authorities in Paris, were communicated to the Austro-Hungarian commander and accepted. The armistice with Austria-Hungary was signed in the Villa Giusti, near Padua, on 3 November. Austria and Hungary signed separate treaties following the collapse of the Austro-Hungarian empire.
Importantly the Austro-Hungarian capitulation left Germany\'s southern frontier under threat of Allied invasion from Austria. Indeed, on 4 November the Allies decided to prepare an advance across the Alps by three armies towards Munich from Austrian territory within five weeks.
After the last German offensive on the Western Front failed in 1918, Hindenburg and Ludendorff admitted that the war effort was doomed, and they pressed Kaiser Wilhelm II for an armistice to be negotiated, and for a rapid change to a civilian government in Germany. They began to take steps to deflect the blame for losing the war from themselves and the German Army to others. Ludendorff said to his staff on 1 October:
> I have \... asked His Majesty to include in the government those circles who are largely responsible for things having developed as they have. We will now see these gentlemen move into the ministries. Let them be the ones to sign the peace treaty that must now be negotiated. Let them eat the soup that they have cooked for us!
In this way, Ludendorff was setting up the republican politicians -- many of them Socialists -- who would be brought into the government, and would become the parties that negotiated the armistice with the Allies, as the scapegoats to take the blame for losing the war, instead of himself and Hindenburg. Normally, during wartime an armistice is negotiated between the military commanders of the hostile forces, but Hindenburg and Ludendorff had instead handed this task to the new civilian government. The attitude of the military was \"\[T\]he parties of the left have to take on the odium of this peace. The storm of anger will then turn against them,\" after which the military could step in again to ensure that things would once again be run \"in the old way\".
On 5 October, the German Chancellor, Prince Maximilian of Baden, contacted U.S. President Woodrow Wilson, indicating that Germany was willing to accept his Fourteen Points as a basis for discussions. Wilson\'s response insisted that Germany institute parliamentary democracy, give up the territory it had gained to that point in the war, and significantly disarm, including giving up the German High Seas Fleet. On 26 October, Ludendorff was dismissed from his post by the Emperor and replaced by Lieutenant General Wilhelm Groener, who started to prepare the withdrawal and demobilisation of the army.
On 11 November 1918, the representatives of the newly formed Weimar Republic -- created after the Revolution of 1918--1919 forced the abdication of the Kaiser -- signed the armistice that ended hostilities. The military commanders had arranged it so that they would not be blamed for suing for peace, but the republican politicians associated with the armistice would: the signature on the armistice document was of Matthias Erzberger, who was later murdered for his alleged treason. In his autobiography, Ludendorff\'s successor Groener stated, \"It suited me just fine when the army and the Supreme Command remained as guiltless as possible in these wretched truce negotiations, from which nothing good could be expected\".
Given that the heavily censored German press had carried nothing but news of victories throughout the war, and that Germany itself was unoccupied while occupying a great deal of foreign territory, it was no wonder that the German public was mystified by the request for an armistice, especially as they did not know that their military leaders had asked for it, nor did they know that the German Army had been in full retreat after their last offensive had failed.
Thus the conditions were set for the \"stab-in-the-back myth\", in which Hindenburg and Ludendorff were held to be blameless, the German Army was seen as undefeated on the battlefield, and the republican politicians -- especially the Socialists -- were accused of betraying Germany. Further blame was laid at their feet after they signed the Treaty of Versailles in 1919, which led to territorial losses and serious financial pain for the shaky new republic, including a crippling schedule of reparation payments.
Conservatives, nationalists, and ex-military leaders began to speak critically about the peace settlement and about Weimar politicians, socialists, communists, and Jews. Even Catholics were viewed with suspicion by some, owing to their supposed fealty to the Pope and their presumed lack of national loyalty and patriotism. It was claimed that these groups had not sufficiently supported the war and had played a role in selling out Germany to its enemies. These *November Criminals*, or those who seemed to benefit from the newly formed Weimar Republic, were seen to have \"stabbed them in the back\" on the home front, either by criticising German nationalism, by instigating unrest and mounting strikes in the critical military industries, or by profiteering. These actions were believed to have deprived Germany of almost certain victory at the eleventh hour.
## Assessments of Germany\'s situation in late 1918 {#assessments_of_germanys_situation_in_late_1918}
thumb\|upright=1.35\|alt=\|Map showing the Western Front as it stood on 11 November 1918. The German frontier of 1914 had been crossed in the vicinities of Mulhouse, Château-Salins, and Marieulles in Alsace-Lorraine.
### Contemporary
When consulted on terms for an armistice in October 1918, Douglas Haig, commander of the British and Commonwealth forces on the western front, stated that \"Germany is not broken in the military sense. During the last weeks her forces have withdrawn fighting very bravely and in excellent order\". Ferdinand Foch, Supreme Allied Commander, agreed with this assessment, stating that \"the German army could undoubtedly take up a new position, and we could not prevent it\". When asked about how long he believed it would take for German forces to be pushed across the Rhine, Foch responded \"Maybe three, maybe four or five months, who knows?\".
In private correspondence Haig was more sanguine. In a mid-October letter to his wife he stated that \"I think we have their army beaten now\". Haig noted in his diary for 11 November 1918 that the German army was in \"very bad\" condition due to insubordination and indiscipline in the ranks.
British army intelligence in October 1918 assessed the German reserves as being very limited, with only 20 divisions for the whole western front of which only five were rated as \"fresh\". However, they also highlighted that the German Class of 1920 (i.e., the class of young men due to be conscripted in 1920 under normal circumstances, but called up early) was being held back as an additional reserve and would be absorbed into German divisions in the winter of 1918 if the war continued. Aerial reconnaissance also highlighted the lack of any prepared fortified positions beyond the Hindenburg line. A report from the retired German general Montgelas, who had previously contacted British intelligence to discuss peace overtures, stated that \"The military situation is desperate, if not hopeless, but it is nothing compared to the interior condition due to the rapid spread of Bolshevism\".
### Post-war {#post_war}
In 1930, the British military theorist Basil Liddell Hart wrote:
> The German acceptance of these severe terms \[i.e., the Armistice terms\] was hastened less by the existing situation on the western front than by the collapse of the \"home front,\" coupled to exposure to a new thrust in the rear through Austria.
Analysing the role that developments on the western front had played in the German decision to capitulate, Hart emphasised particularly the importance of new military threats to Germany that they were ill-equipped to meet, alongside developments within Germany, stating that:
> More truly significant was the decision on November 4, after Austria's surrender, to prepare a concentric advance on Munich by three Allied armies, which would be assembled on the Austro-German frontier within five weeks. In addition Trenchard's Independent Air Force was about to bomb Berlin: on a scale hitherto unattempted in air warfare. And the number of American troops in Europe had now risen to 2,085,000, and the number of divisions to forty-two, of which thirty-two were ready for battle.
German historian Imanuel Geiss also emphasised the importance of the Austro-Hungarian collapse, alongside internal factors affecting Germany, in the final decision by Germany to make peace:
> Whatever doubts may have lingered in German minds about the necessity of laying down arms they were definitely destroyed by events inside and outside Germany. On 27th October Emperor Karl threw up the sponge \[\...\] Germany lay practically open to invasion through Bohemia and Tyrol into Silesia, Saxony, and Bavaria. To wage war on foreign soil was one thing, to have the destructions of modern warfare on German soil was another.
Geiss further linked this threat to Germany\'s borders with the fact that the German revolutionary movement emerged first in the lands that were most threatened by the new invasion threat`{{snd}}`{=mediawiki}Bavaria and Saxony. In Geiss\'s account, this led to the two competing movements for peace`{{snd}}`{=mediawiki}one \"from above\" of establishment figures that wished to use the peace to preserve the status quo, and one \"from below\" that wished to use the peace to establish a socialist, democratic state.
Naval historian and First World War Royal Navy veteran Captain S. W. Roskill assessed the situation at sea as follows:
> There is no doubt at all that in 1918 Allied anti-submarine forces inflicted a heavy defeat on the U-boats \... the so-called \'stab in the back\' by the civil population\'s collapse is a fiction of German militaristic imagination
Although Roskill also balanced this by saying that what he characterised as \"the triumph of unarmed forces\" (i.e., pressure from the German civilian population for peace under the influence of the Allied blockade) was a factor in Allied victory alongside that of armed forces including naval, land, and air forces.
## Development of the myth {#development_of_the_myth}
According to historian Richard Steigmann-Gall, the stab-in-the-back concept can be traced back to a sermon preached on 3 February 1918, by Protestant Court Chaplain Bruno Doehring, nine months before the war had ended. German scholar Boris Barth, in contrast to Steigmann-Gall, implies that Doehring did not actually use the term, but spoke only of \'betrayal\'. Barth traces the first documented use to a centrist political meeting in the Munich Löwenbräukeller on 2 November 1918, in which Ernst Müller-Meiningen, a member of the Progressive People\'s Party in the *Reichstag*, used the term to exhort his listeners to hold out after Kurt Eisner of the radical left Independent Social Democratic Party had predicted an imminent revolution:
> As long as the front holds, we damned well have the duty to hold out in the homeland. We would have to be ashamed of ourselves in front of our children and grandchildren if we attacked the battle front from the rear and gave it a dagger-stab (*wenn wir der Front in den Rücken fielen und ihr den Dolchstoß versetzten*).
However, the widespread dissemination and acceptance of the \"stab-in-the-back\" myth came about through its use by Germany\'s highest military echelon. In Spring 1919, Max Bauer -- an army colonel who had been the primary adviser to Ludendorff on politics and economics -- published *Could We Have Avoided, Won, or Broken Off the War?*, in which he wrote that \"\[The war\] was lost only and exclusively through the failure of the homeland.\" The birth of the specific term \"stab-in-the-back\" itself can possibly be dated to the autumn of 1919, when Ludendorff was dining with the head of the British Military Mission in Berlin, British general Sir Neill Malcolm. Malcolm asked Ludendorff why he thought Germany lost the war. Ludendorff replied with his list of excuses, including that the home front failed the army.
thumb\|upright=0.8\|Friedrich Ebert contributed to the myth when he told returning veterans that \"No enemy has vanquished you.\"
> Malcolm asked him: \"Do you mean, General, that you were stabbed in the back?\" Ludendorff\'s eyes lit up and he leapt upon the phrase like a dog on a bone. \"Stabbed in the back?\" he repeated. \"Yes, that\'s it, exactly, we were stabbed in the back\". And thus was born a legend which has never entirely perished.
The phrase was to Ludendorff\'s liking, and he let it be known among the general staff that this was the \"official\" version, which led to it being spread throughout German society. It was picked up by right-wing political factions, and was even used by Kaiser Wilhelm II in the memoirs he wrote in the 1920s. Right-wing groups used it as a form of attack against the early Weimar Republic government, led by the Social Democratic Party (SPD), which had come to power with the abdication of the Kaiser. However, even the SPD had a part in furthering the myth when *Reichspräsident* Friedrich Ebert, the party leader, told troops returning to Berlin on 10 November 1918 that \"No enemy has vanquished you,\" (*kein Feind hat euch überwunden!*) and \"they returned undefeated from the battlefield\" (*sie sind vom Schlachtfeld unbesiegt zurückgekehrt*). The latter quote was shortened to *im Felde unbesiegt* (undefeated on the battlefield) as a semi-official slogan of the *Reichswehr*. Ebert had meant these sayings as a tribute to the German soldier, but it only contributed to the prevailing feeling.
Further \"proof\" of the myth\'s validity was found in British general Frederick Barton Maurice\'s book *The Last Four Months*, published in 1919. German reviews of the book misrepresented it as proving that the German Army had been betrayed on the home front by being \"dagger-stabbed from behind by the civilian populace\" (*von der Zivilbevölkerung von hinten erdolcht*), an interpretation that Maurice disavowed in the German press, to no effect. According to William L. Shirer, Ludendorff used the reviews of the book to convince Hindenburg about the validity of the myth.
On 18 November 1919, Ludendorff and Hindenburg appeared before the Committee of Inquiry into Guilt for World War I (*Untersuchungsausschuss für Schuldfragen des Weltkrieges*) of the newly elected Weimar National Assembly, which was investigating the causes of the war and Germany\'s defeat. The two generals appeared in civilian clothing, explaining publicly that to wear their uniforms would show too much respect to the commission. Hindenburg refused to answer questions from the chairman, and instead read a statement that had been written by Ludendorff. In his testimony he cited what Maurice was purported to have written, which provided his testimony\'s most memorable part. Hindenburg declared at the end of his -- or Ludendorff\'s -- speech: \"As an English general has very truly said, the German Army was \'stabbed in the back\'\".
Further, the specifics of the stab-in-the-back myth are mentioned briefly by Kaiser Wilhelm II in his memoir:
> I immediately summoned Field Marshal von Hindenburg and the Quartermaster General, General Groener. General Groener again announced that the army could fight no longer and wished rest above all else, and that, therefore, any sort of armistice must be unconditionally accepted; that the armistice must be concluded as soon as possible, since the army had supplies for only six to eight days more and was cut off from all further supplies by the rebels, who had occupied all the supply storehouses and Rhine bridges; that, for some unexplained reason, the armistice commission sent to France--consisting of Erzberger, Ambassador Count Oberndorff, and General von Winterfeldt--which had crossed the French lines two evenings before, had sent no report as to the nature of the conditions.
Hindenburg, Chief of the German General Staff at the time of the Ludendorff Offensive, also mentioned this event in a statement explaining the Kaiser\'s abdication:
> The conclusion of the armistice was directly impending. At moment of the highest military tension revolution broke out in Germany, the insurgents seized the Rhine bridges, important arsenals, and traffic centres in the rear of the army, thereby endangering the supply of ammunition and provisions, while the supplies in the hands of the troops were only enough to last for a few days. The troops on the lines of communication and the reserves disbanded themselves, and unfavourable reports arrived concerning the reliability of the field army proper.
It was particularly this testimony of Hindenburg that led to the widespread acceptance of the *Dolchstoßlegende* in post-World War I Germany.
## Antisemitic aspects {#antisemitic_aspects}
thumb\|upright=0.8\|left\|Nazi theorist Alfred Rosenberg was one of many on the far right who spread the stab-in-the-back myth.

The antisemitic instincts of the German Army were revealed well before the stab-in-the-back myth became the military\'s excuse for losing the war. In October 1916, in the middle of the war, the army ordered a Jewish census of the troops, with the intent to show that Jews were under-represented in the *Heer* (army), and that they were over-represented in non-fighting positions. Instead, the census showed just the opposite: that Jews were over-represented both in the army as a whole and in fighting positions at the front. The Imperial German Army then suppressed the results of the census.
Charges of a Jewish conspiratorial element in Germany\'s defeat drew heavily upon figures such as Kurt Eisner, a Berlin-born German Jew who lived in Munich. He had written about the illegal nature of the war from 1916 onward, and he also had a large hand in the Munich revolution until he was assassinated in February 1919. The Weimar Republic under Friedrich Ebert violently suppressed workers\' uprisings with the help of Gustav Noske and *Reichswehr* general Wilhelm Groener, and tolerated the paramilitary *Freikorps* forming all across Germany. In spite of such tolerance, the Republic\'s legitimacy was constantly attacked with claims such as the stab-in-the-back. Many of its representatives such as Matthias Erzberger and Walther Rathenau were assassinated, and the leaders were branded as \"criminals\" and Jews by the right-wing press dominated by Alfred Hugenberg.
Anti-Jewish sentiment was intensified by the Bavarian Soviet Republic (6 April -- 3 May 1919), a communist government which briefly ruled the city of Munich before being crushed by the *Freikorps*. Many of the Bavarian Soviet Republic\'s leaders were Jewish, allowing antisemitic propagandists to connect Jews with communism, and thus treason.`{{fact|date=December 2024}}`{=mediawiki}
thumb\|upright=1.35\|1924 right-wing German political cartoon showing Philipp Scheidemann, the German Social Democratic politician who proclaimed the Weimar Republic and was its second chancellor, and Matthias Erzberger, an anti-war politician from the Centre Party, who ended World War I by signing the armistice with the Allied Powers, as stabbing the German Army in the back

In 1919, *Deutschvölkischer Schutz und Trutzbund* (German Nationalist Protection and Defiance Federation) leader Alfred Roth, writing under the pseudonym \"Otto Arnim\", published the book *The Jew in the Army*, which he said was based on evidence gathered during his participation in the *Judenzählung*, a military census which had in fact shown that German Jews had served in the front lines proportionately to their numbers. Roth\'s work claimed that most Jews involved in the war were only taking part as profiteers and spies, while he also blamed Jewish officers for fostering a defeatist mentality which impacted negatively on their soldiers. As such, the book offered one of the earliest published versions of the stab-in-the-back legend.
thumb\|upright=1.35\|*\"12,000 Jewish soldiers died on the field of honor for the fatherland.\"* A leaflet published in 1920 by German Jewish veterans in response to accusations of the lack of patriotism A version of the stab-in-the-back myth was publicised in 1922 by the anti-Semitic Nazi theorist Alfred Rosenberg in his primary contribution to Nazi theory on Zionism, *Der Staatsfeindliche Zionismus* (Zionism, the Enemy of the State). Rosenberg accused German Zionists of working for a German defeat and supporting Britain and the implementation of the Balfour Declaration.
## Aftermath
The *Dolchstoß* was a central image in propaganda produced by the many right-wing and traditionally conservative political parties that sprang up in the early days of the Weimar Republic, including Adolf Hitler\'s Nazi Party. For Hitler himself, this explanatory model for World War I was of crucial personal importance. He had learned of Germany\'s defeat while being treated for temporary blindness following a gas attack on the front. In *Mein Kampf*, he described a vision at this time which drove him to enter politics. Throughout his career, he railed against the \"November criminals\" of 1918, who had stabbed the German Army in the back.`{{fact|date=December 2024}}`{=mediawiki}
German historian Friedrich Meinecke attempted to trace the roots of the expression \"stab-in-the-back\" in an 11 June 1922 article in the Viennese newspaper *Neue Freie Presse*. During the 1924 national election campaign, the Munich cultural journal *Süddeutsche Monatshefte* published a series of articles blaming the SPD and trade unions for Germany\'s defeat in World War I; the series appeared during the trial of Hitler and Ludendorff for high treason following the Beer Hall Putsch of 1923. The editor of an SPD newspaper sued the journal for defamation, giving rise to what is known as the *Munich Dolchstoßprozess*, held from 19 October to 20 November 1925. Many prominent figures testified in that trial, including members of the parliamentary committee investigating the reasons for the defeat, so some of its results were made public long before the publication of the committee report in 1928. `{{Quote box
| quote = "And which was more certain dishonor for our people: the occupation of German areas by the enemy, or the cowardice with which our bourgeoisie surrendered the German Reich to an organization of pimps, pickpockets, deserters, black marketeers and hack journalists? Let not the gentlemen prattle now about German honor, as long as they bow under the rule of dishonor....Whoever wants to act in the name of German honor today must first launch a merciless war against the infernal defilers of German honor. They are the not the enemies of yore, but they are the representatives of the November crime. That collection of Marxist, democratic-pacifistic, destructive traitors of our country who pushed our people into its present state of powerlessness."
| source = Adolf Hitler, [[Hitlers Zweites Buch|''Zweites Buch'']], Chapter 8: "Military Power and Fallacy of Border Restoration as Goal"<ref>{{cite book |last=Hitler |first=Adolf |title=Hitler's Secret Book |translator=[[Salvator Attanasio]] |publisher=[[Grove Press]] |location=New York |year=1961 |orig-year=1928 |pages=89-90}}</ref>
| align = right
| width = 40%
| bgcolor = #f9f9f9
}}`{=mediawiki}
### World War II {#world_war_ii}
The Allied policy of unconditional surrender was devised in 1943 in part to avoid a repetition of the stab-in-the-back myth. According to historian John Wheeler-Bennett, speaking from the British perspective,
> It was necessary for the Nazi régime and/or the German Generals to surrender unconditionally in order to bring home to the German people that they had lost the War by themselves; so that their defeat should not be attributed to a \"stab in the back\".
## Wagnerian allusions {#wagnerian_allusions}
thumb\|upright=1.5\|Hagen takes aim at Siegfried\'s back with a spear in an 1847 painting by Julius Schnorr von Carolsfeld of a scene from the epic poem *Nibelungenlied* (\"Song of the Nibelungs\"), which was the basis for Richard Wagner\'s opera *Götterdämmerung*

To some Germans, the idea of a \"stab in the back\" was evocative of Richard Wagner\'s 1876 opera *Götterdämmerung*, in which Hagen murders his enemy Siegfried -- the hero of the story -- with a spear in his back. In Hindenburg\'s memoirs, he compared the collapse of the German Army to Siegfried\'s death.
## Psychology of belief {#psychology_of_belief}
Historian Richard McMasters Hunt argues in a 1958 article that the myth was an irrational belief which commanded the force of irrefutable emotional convictions for millions of Germans. He suggests that behind these myths was a sense of communal shame, not for causing the war, but for losing it. Hunt argues that it was not the guilt of wickedness, but the shame of weakness that seized Germany\'s national psychology, and \"served as a solvent of the Weimar democracy and also as an ideological cement of Hitler\'s dictatorship\".
## Equivalents in other countries {#equivalents_in_other_countries}
### United States {#united_states}
Parallel interpretations of national trauma after military defeat appear in other countries. For example, the idea was applied to the United States\' involvement in the Vietnam War, and it appears in the mythology of the Lost Cause of the Confederacy.
# Declination
In astronomy, **declination** (abbreviated **dec**; symbol ***δ***) is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. The declination angle is measured north (positive) or south (negative) of the celestial equator, along the hour circle passing through the point in question.
The root of the word *declination* (Latin, *declinatio*) means \"a bending away\" or \"a bending down\". It comes from the same root as the words *incline* (\"bend forward\") and *recline* (\"bend backward\").
In some 18th and 19th century astronomical texts, declination is given as *North Pole Distance* (N.P.D.), which is equivalent to 90° − declination. For instance, an object marked as declination −5° would have an N.P.D. of 95°, and a declination of −90° (the south celestial pole) would have an N.P.D. of 180°.
## Explanation
Declination in astronomy is comparable to geographic latitude, projected onto the celestial sphere, and right ascension is likewise comparable to longitude. Points north of the celestial equator have positive declinations, while those south have negative declinations. Any units of angular measure can be used for declination, but it is customarily measured in the degrees (°), minutes (′), and seconds (″) of sexagesimal measure, with 90° equivalent to a quarter circle. Declinations with magnitudes greater than 90° do not occur, because the poles are the northernmost and southernmost points of the celestial sphere.
An object at the
- celestial equator has a declination of 0°
- north celestial pole has a declination of +90°
- south celestial pole has a declination of −90°
The sign is customarily included whether positive or negative.
## Effects of precession {#effects_of_precession}
The Earth\'s axis rotates slowly westward about the poles of the ecliptic, completing one circuit in about 26,000 years. This effect, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including declination) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch.
The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix \"J\" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian Epochs B1875.0, B1900.0, and B1950.0.
| 400 |
Declination
| 0 |
8,612 |
# Declination
## Stars
A star\'s direction remains nearly fixed due to its vast distance, but its right ascension and declination do change gradually due to precession of the equinoxes and proper motion, and cyclically due to annual parallax. The declinations of Solar System objects change very rapidly compared to those of stars, due to orbital motion and close proximity.
As seen from locations in the Earth\'s Northern Hemisphere, celestial objects with declinations greater than 90° − `{{math|''φ''}}`{=mediawiki} (where `{{math|''φ''}}`{=mediawiki} = observer\'s latitude) appear to circle daily around the celestial pole without dipping below the horizon, and are therefore called circumpolar stars. This similarly occurs in the Southern Hemisphere for objects with declinations less (i.e. more negative) than −90° − `{{math|''φ''}}`{=mediawiki} (where `{{math|''φ''}}`{=mediawiki} is always a negative number for southern latitudes). An extreme example is the pole star which has a declination near to +90°, so is circumpolar as seen from anywhere in the Northern Hemisphere except very close to the equator.
Circumpolar stars never dip below the horizon. Conversely, there are other stars that never rise above the horizon, as seen from any given point on the Earth\'s surface (except extremely close to the equator. Upon flat terrain, the distance has to be within approximately 2 km, although this varies based upon the observer\'s altitude and surrounding terrain). Generally, if a star whose declination is `{{math|''δ''}}`{=mediawiki} is circumpolar for some observer (where `{{math|''δ''}}`{=mediawiki} is either positive or negative), then a star whose declination is −`{{math|''δ''}}`{=mediawiki} never rises above the horizon, as seen by the same observer. (This neglects the effect of atmospheric refraction.) Likewise, if a star is circumpolar for an observer at latitude `{{math|''φ''}}`{=mediawiki}, then it never rises above the horizon as seen by an observer at latitude −`{{math|''φ''}}`{=mediawiki}.
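These rules lend themselves to a direct check. The following is a minimal Python sketch (the function name and sample values are illustrative, not from the text) that classifies an object by declination and observer latitude under the simple spherical model above, ignoring atmospheric refraction:

```python
def classify(declination, latitude):
    """Classify an object for an observer: 'circumpolar', 'never rises',
    or 'rises and sets'. Both angles in degrees; north positive.
    Ignores atmospheric refraction and observer altitude."""
    if latitude >= 0:  # Northern Hemisphere observer
        if declination > 90 - latitude:
            return "circumpolar"
        if declination < -(90 - latitude):
            return "never rises"
    else:              # Southern Hemisphere observer (latitude negative)
        if declination < -90 - latitude:
            return "circumpolar"
        if declination > 90 + latitude:
            return "never rises"
    return "rises and sets"

print(classify(89.3, 51.5))   # Polaris from London: circumpolar
print(classify(89.3, -37.8))  # Polaris from Melbourne: never rises
```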
Neglecting atmospheric refraction, for an observer at the equator, declination is always 0° at east and west points of the horizon. At the north point, it is 90° − \|`{{math|''φ''}}`{=mediawiki}\|, and at the south point, −90° + \|`{{math|''φ''}}`{=mediawiki}\|. From the poles, declination is uniform around the entire horizon, approximately 0°.
  **Observer\'s latitude (°)**        **Declination of circumpolar stars (°)**
  ----------------------------------- -------------------------------------------
  90 (Pole)                           90 to 0
  66.5 (Arctic/Antarctic Circle)      90 to 23.5
  45 (midpoint)                       90 to 45
  23.5 (Tropic of Cancer/Capricorn)   90 to 66.5
  0 (Equator)                         none

  : **Stars visible by latitude** (+ for north latitude, − for south)
Non-circumpolar stars are visible only during certain days or seasons of the year.
## Sun
The Sun\'s declination varies with the seasons. As seen from arctic or antarctic latitudes, the Sun is circumpolar near the local summer solstice, leading to the phenomenon of it being above the horizon at midnight, which is called midnight sun. Likewise, near the local winter solstice, the Sun remains below the horizon all day, which is called polar night.
## Relation to latitude {#relation_to_latitude}
When an object is directly overhead its declination is almost always within 0.01 degrees of the observer\'s latitude; it would be exactly equal except for two complications.
The first complication applies to all celestial objects: the object\'s declination equals the observer\'s astronomical latitude, but the term \"latitude\" ordinarily means geodetic latitude, which is the latitude on maps and GPS devices. In the continental United States and surrounding area, the difference (the vertical deflection) is typically a few arcseconds (1 arcsecond = `{{sfrac|3600}}`{=mediawiki} of a degree) but can be as great as 41 arcseconds.
The second complication is that, assuming no deflection of the vertical, \"overhead\" means perpendicular to the ellipsoid at the observer\'s location, but the perpendicular line does not pass through the center of the Earth; almanacs provide declinations measured at the center of the Earth. (An ellipsoid is an approximation to sea level that is mathematically manageable.)
| 617 |
Declination
| 1 |
8,620 |
# Dianic Wicca
**Dianic Wicca**, also known as **Dianic Witchcraft**, is a modern pagan goddess tradition focused on female experience and empowerment. Leadership is by women, who may be ordained as priestesses, or in less formal groups that function as collectives. While some adherents identify as Wiccan, it differs from most traditions of Wicca in that only goddesses are honored (whereas most Wiccan traditions honor both female and male deities).
While there is more than one tradition known as *Dianic*, the most widely known is the female-only variety, with the most prominent tradition thereof founded by Zsuzsanna Budapest in the United States in the 1970s. It is notable for its worship of a single, monotheistic Great Goddess (with all other goddesses---of all cultures worldwide---seen as \"aspects\" of this goddess) and a focus on egalitarian matriarchy. While the tradition is named after the Roman goddess Diana, Dianics worship goddesses from many cultures, within the Dianic Wiccan ritual framework. Diana (considered correlate to the Greek Artemis) \"is seen as representing a central mythic theme of woman-identified cosmology. She is the protector of women and of the wild, untamed spirit of nature.\"
The Dianic Wiccan belief and ritual structure is an eclectic combination of elements from British Traditional Wicca, Italian folk-magic as recorded by Charles Leland in *Aradia*, New Age beliefs, and folk magic and healing practices from a variety of different cultures.
## Beliefs and practices {#beliefs_and_practices}
Dianic Wiccans of the Budapest lineage worship the Goddess, who they see as containing all goddesses, from all cultures; she is seen as the source of all living things and containing all that is within her. `{{Blockquote|While Diana does have a triple aspect, it is in Her aspect as Virgin Huntress that She guides Her daughters to wholeness. She is "virgin" in the ancient sense of "She Who Is Whole Unto Herself." The ancient meaning of "virgin" described a woman who was unmarried, autonomous, belonging solely to herself. The original meaning of this word was not attached to a sexual act with a man. Diana/Artemis did not associate herself or consort with men, which is why these Goddesses are often understood to be lesbian.<ref name="Barrett"/>}}`{=mediawiki}
Dianic covens practice magic in the form of meditation and visualization in addition to spell work. They focus especially on healing themselves from the wounds of the patriarchy while affirming their own womanhood.
Rituals can include reenacting religious and spiritual lore from a female-centered standpoint, celebrating the female body, and mourning society\'s abuses of women. The practice of magic is rooted in the belief that energy or \'life force\' can be directed to enact change. However, rituals are often improvised to suit individual or group needs and vary from coven to coven. Some Dianic Wiccans eschew manipulative spellwork and hexing because it goes against the Wiccan Rede. However, many other Dianic witches (notably Budapest) do not consider hexing or binding of those who attack women to be wrong, and actively encourage the binding of rapists.
| 501 |
Dianic Wicca
| 0 |
8,620 |
# Dianic Wicca
## Beliefs and practices {#beliefs_and_practices}
### Differences from mainstream Wicca {#differences_from_mainstream_wicca}
Like other Wiccans, Dianics may form covens, attend festivals, celebrate the eight major Wiccan holidays, and gather on Esbats. They use many of the same altar tools, rituals, and vocabulary as other Wiccans. Dianics may also gather in less formal Circles. The most noticeable difference between the two is that Dianic covens of Budapest lineage are composed entirely of women. Central to feminist Dianic focus and practice are embodied Women\'s Mysteries---the celebrations and honoring of the female life cycle and its correspondences to the Earth\'s seasonal cycle, healing from internalized oppression, female sovereignty and agency. Another marked difference in cosmology from other Wiccan traditions is the rejection of the concept of duality based in gender stereotypes.
When asked why \"men and gods\" are excluded from her rituals, Budapest stated:
Sociological studies have shown that there is therapeutic value inherent in Dianic ritual. Healing rituals to overcome personal trauma and raise awareness about violence against women have earned comparisons to the female-centered consciousness-raising groups of the 1960s and 1970s. Some Dianic groups develop rituals specifically to confront gendered personal trauma, such as battery, rape, incest, and partner abuse. In one ethnographic study of such a ritual, women shifted their understanding of power from the hands of their abusers to themselves. It was found that this ritual had improved self-perception in participants in the short-term, and that the results could be sustained with ongoing practice.
Dianic Wicca developed from the Women\'s Liberation Movement and some covens traditionally compare themselves with radical feminism. Dianics pride themselves on the inclusion of lesbian and bisexual members in their groups and leadership. It is a goal within many covens to explore female sexuality and sensuality outside of male control, and many rituals function to affirm lesbian sexuality, making it a popular tradition for lesbians and bisexuals. Some covens exclusively consist of same-sex oriented women and advocate lesbian separatism. Ruth Barrett writes,
## History
*Aradia, or the Gospel of the Witches* claims that ancient cults of Diana, Aphrodite, Aradia, and Herodias linked to the Sacred Mysteries are the origin of the all-female coven and many witch myths as we know them.
Z Budapest\'s branch of Dianic Wicca began on the Winter Solstice of 1971, when Budapest led a ceremony in Hollywood, California. Self-identifying as a \"hereditary witch,\" and claiming to have learned folk magic from her mother, Budapest is frequently considered the mother of modern Dianic Wiccan tradition. Dianic Wicca itself is named after the Roman goddess of the same name. Ruth Rhiannon Barrett was ordained by Z Budapest in 1980 and inherited Budapest\'s Los Angeles ministry. This community continues through Circle of Aradia, a grove of Temple of Diana, Inc.
| 455 |
Dianic Wicca
| 1 |
8,620 |
# Dianic Wicca
## Denominations and related traditions {#denominations_and_related_traditions}
- Traditions derived from Zsuzsanna Budapest -- Female-only covens run by priestesses trained and initiated by Budapest.
- Independent Dianic witches -- who may have been inspired by Budapest, her published work (such as *The Holy Book of Women\'s Mysteries*) or other woman\'s spirituality movements, and who emphasize independent study and self-initiation.
### McFarland Dianic {#mcfarland_dianic}
McFarland Dianic is a Neopagan tradition of goddess worship founded by Morgan McFarland and Mark Roberts which, despite the shared name, has a different theology and structure than the women-only groups. In most cases, McFarland Dianics accept male participants. The tradition is largely based on the work of Robert Graves and his book *The White Goddess*. While some McFarland covens will initiate men, the leadership is limited to female priestesses. Like the women-only Dianic traditions, \"McFarland Dianic covens espouse feminism as an all-important concept.\" They consider the decision whether to include or exclude males as \"solely the choice of \[a member coven\'s\] individual High Priestess.\"
## Criticism for transphobia {#criticism_for_transphobia}
Dianic Wicca has been criticised by elements in the Neopagan community for being transphobic. In February 2011, Zsuzsanna Budapest conducted a ritual with the Circle of Cerridwen at PantheaCon for \"genetic women only\" from which she barred males. This caused a backlash that led many to criticize Dianic Wicca as an inherently transphobic lesbian-separatist movement
| 231 |
Dianic Wicca
| 2 |
8,640 |
# Database normalization
**Database normalization** is the process of structuring a relational database in accordance with a series of so-called ***normal forms*** in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.
Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of *synthesis* (creating a new database design) or *decomposition* (improving an existing database design).
## Objectives
A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a \"universal data sub-language\" grounded in first-order logic. An example of such a language is SQL, though it is one that Codd regarded as seriously flawed.
The objectives of normalization beyond 1NF (first normal form) were stated by Codd as:
When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized:
**Insertion anomaly**: There are circumstances in which certain facts cannot be recorded at all. For example, each record in a \"Faculty and Their Courses\" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. Therefore, the details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.

**Update anomaly**: The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an \"Employees\' Skills\" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful -- the employee\'s address is updated on some records but not others -- then the relation is left in an inconsistent state. Specifically, the relation provides conflicting answers to the question of what this particular employee\'s address is.

**Deletion anomaly**: Under certain circumstances, the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The \"Faculty and Their Courses\" relation described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
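As a minimal sketch of the update anomaly (the relation name and values are illustrative, following the hypothetical \"Employees\' Skills\" example above), modeled as Python rows:

```python
# One row per (employee, skill): the address is repeated on every row.
skills = [
    {"employee_id": 42, "address": "12 Oak St", "skill": "Typing"},
    {"employee_id": 42, "address": "12 Oak St", "skill": "Filing"},
]

# A partially successful update touches only the first matching row ...
skills[0]["address"] = "9 Elm Ave"

# ... and the relation now gives conflicting answers for the address:
print({row["address"] for row in skills if row["employee_id"] == 42})
# {'9 Elm Ave', '12 Oak St'} -- an inconsistent state

# A normalized design stores the address exactly once:
employees = {42: {"address": "9 Elm Ave"}}
employee_skills = [(42, "Typing"), (42, "Filing")]
```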
### Minimize redesign when extending the database structure {#minimize_redesign_when_extending_the_database_structure}
A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected.
Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.
| 519 |
Database normalization
| 0 |
8,640 |
# Database normalization
## Normal forms {#normal_forms}
Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970. Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce--Codd normal form (BCNF) in 1974.
Ronald Fagin introduced the fourth normal form (4NF) in 1977 and the fifth normal form (5NF) in 1979. Christopher J. Date introduced the sixth normal form (6NF) in 2003.
Informally, a relational database relation is often described as \"normalized\" if it meets third normal form. Most 3NF relations are free of insertion, update, and deletion anomalies.
The normal forms (from least normalized to most normalized) are:
- **UNF** (unnormalized form): no constraints are imposed.
- **1NF**: unique rows (no duplicate records), and scalar columns (columns cannot contain relations or composite values).
- **2NF**: every non-prime attribute has a full functional dependency on each candidate key (attributes depend on the *whole* of every key).
- **3NF**: every non-trivial functional dependency either begins with a superkey or ends with a prime attribute (attributes depend *only* on candidate keys).
- **EKNF**: every non-trivial functional dependency either begins with a superkey or ends with an elementary prime attribute (a stricter form of 3NF).
- **BCNF**: every non-trivial functional dependency begins with a superkey (a stricter form of 3NF).
- **4NF**: every non-trivial multivalued dependency begins with a superkey.
- **ETNF**: every join dependency has a superkey component.
- **5NF**: every join dependency has only superkey components.
- **DKNF**: every constraint is a consequence of domain constraints and key constraints.
- **6NF**: every join dependency is trivial.
| 460 |
Database normalization
| 1 |
8,640 |
# Database normalization
## Example of a step-by-step normalization {#example_of_a_step_by_step_normalization}
Normalization is a database design technique used to bring a relational database table up to a higher normal form. The process is progressive: a higher level of database normalization cannot be achieved unless the previous levels have been satisfied.

That means that, having data in unnormalized form (the least normalized) and aiming to achieve the highest level of normalization, the first step would be to ensure compliance with first normal form, the second step would be to ensure second normal form is satisfied, and so forth in the order mentioned above, until the data conforms to sixth normal form.
However, normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.
*The data in the following example was intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized.*
### Initial data {#initial_data}
Let a database table exist with the following structure:
<table>
<thead>
<tr class="header">
<th><p>Title</p></th>
<th><p>Author</p></th>
<th><p>Author Nationality</p></th>
<th><p>Format</p></th>
<th><p>Price</p></th>
<th><p>Subject</p></th>
<th><p>Pages</p></th>
<th><p>Thickness</p></th>
<th><p>Publisher</p></th>
<th><p>Publisher Country</p></th>
<th><p>Genre ID</p></th>
<th><p>Genre Name</p></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>Beginning MySQL Database Design and Optimization</p></td>
<td><p>Chad Russell</p></td>
<td><p>American</p></td>
<td><p>Hardcover</p></td>
<td><p>49.99</p></td>
<td><table>
<tbody>
<tr class="odd">
<td><p>MySQL</p></td>
</tr>
<tr class="even">
<td><p>Database</p></td>
</tr>
<tr class="odd">
<td><p>Design</p></td>
</tr>
</tbody>
</table></td>
<td><p>520</p></td>
<td><p>Thick</p></td>
<td><p>Apress</p></td>
<td><p>USA</p></td>
<td><p>1</p></td>
<td><p>Tutorial</p></td>
</tr>
</tbody>
</table>
For this example it is assumed that each book has only one author.
A table that conforms to the relational model has a primary key which uniquely identifies a row. In our example, the primary key is a composite key of **{Title, Format}**:
<table>
<thead>
<tr class="header">
<th><p>Title</p></th>
<th><p>Author</p></th>
<th><p>Author Nationality</p></th>
<th><p>Format</p></th>
<th><p>Price</p></th>
<th><p>Subject</p></th>
<th><p>Pages</p></th>
<th><p>Thickness</p></th>
<th><p>Publisher</p></th>
<th><p>Publisher Country</p></th>
<th><p>Genre ID</p></th>
<th><p>Genre Name</p></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><p>Beginning MySQL Database Design and Optimization</p></td>
<td><p>Chad Russell</p></td>
<td><p>American</p></td>
<td><p>Hardcover</p></td>
<td><p>49.99</p></td>
<td><table>
<tbody>
<tr class="odd">
<td><p>MySQL</p></td>
</tr>
<tr class="even">
<td><p>Database</p></td>
</tr>
<tr class="odd">
<td><p>Design</p></td>
</tr>
</tbody>
</table></td>
<td><p>520</p></td>
<td><p>Thick</p></td>
<td><p>Apress</p></td>
<td><p>USA</p></td>
<td><p>1</p></td>
<td><p>Tutorial</p></td>
</tr>
</tbody>
</table>
### Satisfying 1NF {#satisfying_1nf}
In the first normal form each field contains a single value. A field may not contain a set of values or a nested record. **Subject** contains a set of subject values, meaning it does not comply. To solve the problem, the subjects are extracted into a separate **Subject** table:
  Title                                               Author         Author Nationality   Format      Price   Pages   Thickness   Publisher   Publisher Country   Genre ID   Genre Name
  --------------------------------------------------- -------------- -------------------- ----------- ------- ------- ----------- ----------- ------------------- ---------- ------------
  Beginning MySQL Database Design and Optimization    Chad Russell   American             Hardcover   49.99   520     Thick       Apress      USA                 1          Tutorial

  : Book
**Title** **Subject name**
-------------------------------------------------- ------------------
Beginning MySQL Database Design and Optimization MySQL
Beginning MySQL Database Design and Optimization Database
Beginning MySQL Database Design and Optimization Design
: **Title - Subject**
Instead of one table in unnormalized form, there are now two tables conforming to the 1NF.
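The same split can be sketched mechanically; the following Python fragment (field names are illustrative) removes the nested **Subject** values and emits one (Title, Subject) row per value:

```python
# Non-1NF row: the "subject" field holds a set of values.
book = {
    "title": "Beginning MySQL Database Design and Optimization",
    "format": "Hardcover",
    "subject": ["MySQL", "Database", "Design"],  # violates 1NF
}

# Book table without the nested field ...
book_1nf = {k: v for k, v in book.items() if k != "subject"}

# ... and a Title - Subject table with one scalar value per row.
title_subject = [(book["title"], s) for s in book["subject"]]
```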
| 549 |
Database normalization
| 2 |
8,640 |
# Database normalization
## Example of a step-by-step normalization {#example_of_a_step_by_step_normalization}
### Satisfying 2NF {#satisfying_2nf}
Recall that the **Book** table below has a composite key of **{Title, Format}**, which will not satisfy 2NF if some subset of that key is a determinant. At this point in our design the **key** is not finalized as the primary key, so it is called a candidate key. Consider the following table:
Title Format Author Author Nationality Price Pages Thickness Publisher Publisher Country Genre ID Genre Name
--------------------------------------------------------- ----------- -------------- -------------------- ------- ------- ----------- ---------------- ------------------- ---------- -----------------
Beginning MySQL Database Design and Optimization Hardcover Chad Russell American 49.99 520 Thick Apress USA 1 Tutorial
Beginning MySQL Database Design and Optimization E-book Chad Russell American 22.34 520 Thick Apress USA 1 Tutorial
The Relational Model for Database Management: Version 2 E-book E.F.Codd British 13.88 538 Thick Addison-Wesley USA 2 Popular science
The Relational Model for Database Management: Version 2 Paperback E.F.Codd British 39.99 538 Thick Addison-Wesley USA 2 Popular science
: Book
All of the attributes that are not part of the candidate key depend on *Title*, but only *Price* also depends on *Format*. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it.
To normalize this table, make **{Title}** a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and remove *Price* into a separate table so that its dependency on *Format* can be preserved:
Title Author Author Nationality Pages Thickness Publisher Publisher Country Genre ID Genre Name
--------------------------------------------------------- -------------- -------------------- ------- ----------- ---------------- ------------------- ---------- -----------------
Beginning MySQL Database Design and Optimization Chad Russell American 520 Thick Apress USA 1 Tutorial
The Relational Model for Database Management: Version 2 E.F.Codd British 538 Thick Addison-Wesley USA 2 Popular science
: Book
Title Format Price
--------------------------------------------------------- ----------- -------
Beginning MySQL Database Design and Optimization Hardcover 49.99
Beginning MySQL Database Design and Optimization E-book 22.34
The Relational Model for Database Management: Version 2 E-book 13.88
The Relational Model for Database Management: Version 2 Paperback 39.99
: Price
Now, both the **Book** and **Price** tables conform to 2NF.
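The partial dependency that motivated this step can also be detected mechanically. The helper below is a hypothetical Python sketch: it tests whether, in the sample rows given, the attributes in `lhs` functionally determine `attr` (it checks only the data it sees, not the intended semantics of the schema):

```python
def determines(rows, lhs, attr):
    """True if equal values of the `lhs` attributes always imply
    an equal value of `attr` within the given rows."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        if key in seen and seen[key] != row[attr]:
            return False
        seen[key] = row[attr]
    return True

rows = [
    {"title": "Beginning MySQL...", "format": "Hardcover", "price": 49.99, "pages": 520},
    {"title": "Beginning MySQL...", "format": "E-book", "price": 22.34, "pages": 520},
    {"title": "The Relational Model...", "format": "E-book", "price": 13.88, "pages": 538},
    {"title": "The Relational Model...", "format": "Paperback", "price": 39.99, "pages": 538},
]

print(determines(rows, ["title"], "pages"))            # True: partial dependency (the 2NF violation)
print(determines(rows, ["title"], "price"))            # False: needs the whole key
print(determines(rows, ["title", "format"], "price"))  # True
```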
### Satisfying 3NF {#satisfying_3nf}
The **Book** table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). Similar violations exist for publisher ({Publisher Country} is dependent on {Publisher}, which is dependent on {Title}) and for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the **Book** table is not in 3NF. To resolve this, we can place {Author Nationality}, {Publisher Country}, and {Genre Name} in their own respective tables, thereby eliminating the transitive functional dependencies:
Title Author Pages Thickness Publisher Genre ID
--------------------------------------------------------- -------------- ------- ----------- ---------------- ----------
Beginning MySQL Database Design and Optimization Chad Russell 520 Thick Apress 1
The Relational Model for Database Management: Version 2 E.F.Codd 538 Thick Addison-Wesley 2
: Book
+---------------------------------------------------------------------------------+
| Title Format Price |
| --------------------------------------------------------- ----------- ------- |
| Beginning MySQL Database Design and Optimization Hardcover 49.99 |
| Beginning MySQL Database Design and Optimization E-book 22.34 |
| The Relational Model for Database Management: Version 2 E-book 13.88 |
| The Relational Model for Database Management: Version 2 Paperback 39.99 |
| |
| : Price |
+---------------------------------------------------------------------------------+
Author Author Nationality
-------------- --------------------
Chad Russell American
E.F.Codd British
: Author
Publisher Country
---------------- ---------
Apress USA
Addison-Wesley USA
: Publisher
Genre ID Name
---------- -----------------
1 Tutorial
2 Popular science
: Genre
### Satisfying EKNF {#satisfying_eknf}
The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended *\"to capture the salient qualities of both 3NF and BCNF\"* while avoiding the problems of both (namely, that 3NF is \"too forgiving\" and BCNF is \"prone to computational complexity\"). Since it is rarely mentioned in literature, it is not included in this example.
| 654 |
Database normalization
| 3 |
8,640 |
# Database normalization
## Example of a step-by-step normalization {#example_of_a_step_by_step_normalization}
### Satisfying 4NF {#satisfying_4nf}
Assume the database is owned by a book retailer franchise that has several franchisees owning shops in different locations, and the retailer therefore decided to add a table that contains data about the availability of the books at different locations:
Franchisee ID Title Location
--------------- --------------------------------------------------------- ------------
1 Beginning MySQL Database Design and Optimization California
1 Beginning MySQL Database Design and Optimization Florida
1 Beginning MySQL Database Design and Optimization Texas
1 The Relational Model for Database Management: Version 2 California
1 The Relational Model for Database Management: Version 2 Florida
1 The Relational Model for Database Management: Version 2 Texas
2 Beginning MySQL Database Design and Optimization California
2 Beginning MySQL Database Design and Optimization Florida
2 Beginning MySQL Database Design and Optimization Texas
2 The Relational Model for Database Management: Version 2 California
2 The Relational Model for Database Management: Version 2 Florida
2 The Relational Model for Database Management: Version 2 Texas
3 Beginning MySQL Database Design and Optimization Texas
: **Franchisee - Book - Location**
As this table structure consists of a compound primary key, it doesn\'t contain any non-key attributes and it\'s already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the **Title** is not unambiguously bound to a certain **Location** and therefore the table doesn\'t satisfy 4NF.
That means that, to satisfy the fourth normal form, this table needs to be decomposed as well:
+-----------------------------------------------------------------------------+-------------------------------------------+
| Franchisee ID Title | Franchisee ID Location |
| --------------- --------------------------------------------------------- | --------------- ------------ |
| 1 Beginning MySQL Database Design and Optimization | 1 California |
| 1 The Relational Model for Database Management: Version 2 | 1 Florida |
| 2 Beginning MySQL Database Design and Optimization | 1 Texas |
| 2 The Relational Model for Database Management: Version 2 | 2 California |
| 3 Beginning MySQL Database Design and Optimization | 2 Florida |
| | 2 Texas |
| | 3 Texas |
| : align=\"top\" \|**Franchisee - Book** | |
| | |
| | : align=\"top\" \|Franchisee - Location |
+-----------------------------------------------------------------------------+-------------------------------------------+
Now, every record is unambiguously identified by a superkey; therefore, 4NF is satisfied.
### Satisfying ETNF {#satisfying_etnf}
Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint:
- If a certain **supplier** supplies a certain **title**
- and the **title** is supplied to the **franchisee**
- and the **franchisee** is being supplied by the **supplier,**
- then the **supplier** supplies the **title** to the **franchisee**.
Supplier ID Title Franchisee ID
------------- --------------------------------------------------------- ---------------
1 Beginning MySQL Database Design and Optimization 1
2 The Relational Model for Database Management: Version 2 2
3 Learning SQL 3
: Supplier - Book - Franchisee
This table is in 4NF, but it is equal to the join of its projections: **{{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}.** No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy ETNF and can be further decomposed:
+---------------------------------------------------------------------------+-----------------------------------------------------------------------------+---------------------------------+
| Supplier ID Title | Title Franchisee ID | Supplier ID Franchisee ID |
| ------------- --------------------------------------------------------- | --------------------------------------------------------- --------------- | ------------- --------------- |
| 1 Beginning MySQL Database Design and Optimization | Beginning MySQL Database Design and Optimization 1 | 1 1 |
| 2 The Relational Model for Database Management: Version 2 | The Relational Model for Database Management: Version 2 2 | 2 2 |
| 3 Learning SQL | Learning SQL 3 | 3 3 |
| | | |
| : Supplier - Book | : Book - Franchisee | : Franchisee - Supplier |
+---------------------------------------------------------------------------+-----------------------------------------------------------------------------+---------------------------------+
The decomposition produces ETNF compliance.
| 644 |
Database normalization
| 4 |
8,640 |
# Database normalization
## Example of a step-by-step normalization {#example_of_a_step_by_step_normalization}
### Satisfying 5NF {#satisfying_5nf}
To spot a table not satisfying 5NF, it is usually necessary to examine the data thoroughly. Take the table from the 4NF example with a small modification to its data, and examine whether it satisfies 5NF:
Franchisee ID Title Location
--------------- --------------------------------------------------------- ------------
1 Beginning MySQL Database Design and Optimization California
1 Learning SQL California
1 The Relational Model for Database Management: Version 2 Texas
2 The Relational Model for Database Management: Version 2 California
: **Franchisee - Book - Location**
Decomposing this table reduces redundancy, resulting in the following two tables:
+-----------------------------------------------------------------------------+-----------------------------------------------+
| Franchisee ID Title | Franchisee ID Location |
| --------------- --------------------------------------------------------- | --------------- ------------ |
| 1 Beginning MySQL Database Design and Optimization | 1 California |
| 1 Learning SQL | 1 Texas |
| 1 The Relational Model for Database Management: Version 2 | 2 California |
| 2 The Relational Model for Database Management: Version 2 | |
| | |
| | : align=\"top\" \|**Franchisee - Location** |
| : align=\"top\" \|**Franchisee - Book** | |
+-----------------------------------------------------------------------------+-----------------------------------------------+
The query joining these tables would return the following data:
Franchisee ID Title Location
--------------- --------------------------------------------------------- ------------
1 Beginning MySQL Database Design and Optimization California
1 Learning SQL California
1 The Relational Model for Database Management: Version 2 California
1 The Relational Model for Database Management: Version 2 Texas
1 Learning SQL Texas
1 Beginning MySQL Database Design and Optimization Texas
2 The Relational Model for Database Management: Version 2 California
: **Franchisee - Book - Location JOINed**
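The spurious rows can be reproduced mechanically: a natural join of the two projections on **Franchisee ID** pairs every title with every location per franchisee. A minimal Python sketch (titles abbreviated for readability):

```python
franchisee_book = [(1, "Beginning MySQL..."), (1, "Learning SQL"),
                   (1, "The Relational Model..."), (2, "The Relational Model...")]
franchisee_location = [(1, "California"), (1, "Texas"), (2, "California")]

# Natural join of the projections on Franchisee ID:
joined = {(f, title, loc)
          for f, title in franchisee_book
          for g, loc in franchisee_location
          if f == g}

print(len(joined))  # 7 rows -- three spurious rows beyond the original four
```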
The JOIN returns three more rows than it should; adding another table to clarify the relation results in three separate tables:

  Franchisee ID   Title
  --------------- ---------------------------------------------------------
  1               Beginning MySQL Database Design and Optimization
  1               Learning SQL
  1               The Relational Model for Database Management: Version 2
  2               The Relational Model for Database Management: Version 2

  : **Franchisee - Book**

  Franchisee ID   Location
  --------------- ------------
  1               California
  1               Texas
  2               California

  : **Franchisee - Location**

  Location     Title
  ------------ ---------------------------------------------------------
  California   Beginning MySQL Database Design and Optimization
  California   Learning SQL
  California   The Relational Model for Database Management: Version 2
  Texas        The Relational Model for Database Management: Version 2

  : **Location - Book**
What will the JOIN return now? Joining these three tables still cannot reproduce the original relation without spurious rows. That means it wasn\'t possible to decompose **Franchisee - Book - Location** without data loss; therefore, the table already satisfies 5NF.

**Disclaimer** -- the data used demonstrates the principle, but does not reflect a realistic design. In this case the data would best be decomposed into the following, with a surrogate key which we will call \'Store ID\':
+------------------------------------------------------------------------+-------------------------------------------------------+---+
| Store ID Title | Store ID Franchisee ID Location | |
| ---------- --------------------------------------------------------- | ---------- --------------- ------------ | |
| 1 Beginning MySQL Database Design and Optimization | 1 1 California | |
| 1 Learning SQL | 2 1 Texas | |
| 2 The Relational Model for Database Management: Version 2 | 3 2 California | |
| 3 The Relational Model for Database Management: Version 2 | | |
| | | |
| | : align=\"top\" \|**Store - Franchisee - Location** | |
| : align=\"top\" \|**Store - Book** | | |
+------------------------------------------------------------------------+-------------------------------------------------------+---+
The JOIN will now return the expected result:
Store ID Title Franchisee ID Location
---------- --------------------------------------------------------- --------------- ------------
1 Beginning MySQL Database Design and Optimization 1 California
1 Learning SQL 1 California
2 The Relational Model for Database Management: Version 2 1 Texas
3 The Relational Model for Database Management: Version 2 2 California
: **Store - Book - Franchisee - Location JOINed**
C.J. Date has argued that only a database in 5NF is truly \"normalized\".
### Satisfying DKNF {#satisfying_dknf}
Let\'s have a look at the **Book** table from previous examples and see if it satisfies the domain-key normal form:
Title **Pages** Thickness *Genre ID* *Publisher ID*
--------------------------------------------------------- ----------- ----------- ------------ ----------------
Beginning MySQL Database Design and Optimization 520 Thick *1* *1*
The Relational Model for Database Management: Version 2 538 Thick *2* *2*
Learning SQL 338 Slim *1* *3*
SQL Cookbook 636 Thick *1* *3*
: Book
Logically, **Thickness** is determined by number of pages. That means it depends on **Pages** which is not a key. Let\'s set an example convention saying a book up to 350 pages is considered \"slim\" and a book over 350 pages is considered \"thick\".
This convention is technically a constraint, but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to maintain data integrity.
In other words -- nothing prevents us from putting, for example, \"Thick\" for a book with only 50 pages -- and this makes the table violate DKNF.
To solve this, a table holding enumeration that defines the **Thickness** is created, and that column is removed from the original table:
+---------------------------------------------+---------------------------------------------------------------------------------------------------+
| Thickness Min pages Max pages | Title Pages *Genre ID* *Publisher ID* |
| ----------- ----------- ----------------- | --------------------------------------------------------- ------- ------------ ---------------- |
| Slim 1 350 | Beginning MySQL Database Design and Optimization 520 *1* *1* |
| Thick 351 999,999,999,999 | The Relational Model for Database Management: Version 2 538 *2* *2* |
| | Learning SQL 338 *1* *3* |
| : Thickness Enum | SQL Cookbook 636 *1* *3* |
| | |
| | : Book - Pages - Genre - Publisher |
+---------------------------------------------+---------------------------------------------------------------------------------------------------+
That way, the domain integrity violation has been eliminated, and the table is in DKNF.
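With the enumeration table in place, **Thickness** becomes derivable by a range lookup rather than being stored redundantly. A minimal Python sketch of that convention (the 350-page threshold is the example convention defined above):

```python
# Thickness enumeration: (min_pages, max_pages, label)
THICKNESS_ENUM = [(1, 350, "Slim"), (351, 999_999_999_999, "Thick")]

def thickness(pages):
    """Derive the Thickness label from the page count."""
    for lo, hi, label in THICKNESS_ENUM:
        if lo <= pages <= hi:
            return label
    raise ValueError("page count out of range")

print(thickness(338))  # Slim
print(thickness(520))  # Thick
```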
### Satisfying 6NF {#satisfying_6nf}
A simple and intuitive definition of the sixth normal form is that *\"a table is in 6NF when the row contains the primary key, and at most one other attribute\"*.
That means, for example, the **Publisher** table designed while creating the 1NF:
Publisher ID Name Country
-------------- -------- ---------
1 Apress USA
: Publisher
needs to be further decomposed into two tables:
+---------------------------+----------------------------+
| Publisher ID Name | Publisher ID Country |
| -------------- -------- | -------------- --------- |
| 1 Apress | 1 USA |
| | |
| : Publisher | : Publisher country |
+---------------------------+----------------------------+
The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.
However, in data warehouses, which do not permit interactive updates and which are specialized for fast queries on large data volumes, certain DBMSs use an internal 6NF representation -- known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X).
In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a \"columnstore index\" for a particular table.
| 1,283 |
Database normalization
| 5 |
8,641 |
# Desmothoracid
Order **Desmothoracida**, the **desmothoracids**, are a group of heliozoan protists, usually sessile and found in freshwater environments. The adult is a spherical cell around 10-20 μm in diameter surrounded by a perforated organic lorica, or shell, with many radial pseudopods projecting through the holes to capture food. These are supported by small bundles of microtubules that arise near a point on the nuclear membrane. Unlike other heliozoans, the microtubules are not in any regular geometric array, there does not appear to be a microtubule organizing center, and there is no distinction between the outer and inner cytoplasm.
Reproduction takes place by the budding-off of small motile cells, usually with two flagella. Later these are lost, and the pseudopods and lorica are formed. Typically, a single lengthened pseudopod will secrete a hollow stalk that attaches the cell to the substrate. The form of the flagella, the tubular cristae within the mitochondria, and other characters have led to the suggestion that the desmothoracids belong among what is now the Cercozoa. This was later confirmed by genetic studies.
As of the year 2000, the order Desmothoracida contained five genera with a total of 10 species.
- Order **Desmothoracida** Hartwig & Lesser 1874 emend. Honigberg et al
| 205 |
Desmothoracid
| 0 |
8,643 |
# Molecular diffusion
**Molecular diffusion** is the motion of atoms, molecules, or other particles of a gas or liquid at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid, size and density (or their product, mass) of the particles. This type of diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration.
Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a \"dynamic equilibrium\". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems S~1~ and S~2~ at the same temperature and capable of exchanging particles. If there is a change in the potential energy of a system, for example μ~1~ \> μ~2~ (where μ is the chemical potential), an energy flow will occur from S~1~ to S~2~, because nature always prefers low energy and maximum entropy.
Molecular diffusion is typically described mathematically using Fick\'s laws of diffusion.
## Applications
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
- Sintering to produce solid materials (powder metallurgy, production of ceramics)
- Chemical reactor design
- Catalyst design in chemical industry
- Steel can be diffused (e.g., with carbon or nitrogen) to modify its properties
- Doping during production of semiconductors.
## Significance
Diffusion is part of the transport phenomena. Among mass transport mechanisms, molecular diffusion is one of the slowest.
### Biology
In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis.
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
## Tracer, self- and chemical diffusion {#tracer_self__and_chemical_diffusion}
Fundamentally, two types of diffusion are distinguished:
- *Tracer diffusion* and *self-diffusion*, which is a spontaneous mixing of molecules taking place in the absence of a concentration (or chemical potential) gradient. This type of diffusion can be followed using isotopic tracers, hence the name. Tracer diffusion is usually assumed to be identical to self-diffusion (assuming no significant isotopic effect). This diffusion can take place under equilibrium. An excellent method for the measurement of self-diffusion coefficients is pulsed field gradient (PFG) NMR, where no isotopic tracers are needed. In a so-called NMR spin echo experiment this technique uses the nuclear spin precession phase, allowing one to distinguish chemically and physically completely identical species, e.g. in the liquid phase, such as water molecules within liquid water. The self-diffusion coefficient of water has been experimentally determined with high accuracy and thus often serves as a reference value for measurements on other liquids. The self-diffusion coefficient of neat water is 2.299·10^−9^ m^2^·s^−1^ at 25 °C and 1.261·10^−9^ m^2^·s^−1^ at 4 °C.
- *Chemical diffusion* occurs in a presence of concentration (or chemical potential) gradient and it results in net transport of mass. This is the process described by the diffusion equation. This diffusion is always a non-equilibrium process, increases the system entropy, and brings the system closer to equilibrium.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
## Non-equilibrium system {#non_equilibrium_system}
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time, and where classical results may locally apply. As the name suggests, this process is not a true equilibrium since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
| 831 |
Molecular diffusion
| 0 |
8,643 |
# Molecular diffusion
## Concentration dependent \"collective\" diffusion {#concentration_dependent_collective_diffusion}
*Collective diffusion* is the diffusion of a large number of particles, most often within a solvent.
Contrary to Brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In the case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient *D* (the speed of diffusion in the particle diffusion equation) is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects:
- the diffusion coefficient *D* in the particle diffusion equation becomes dependent of concentration. For an attractive interaction between particles, the diffusion coefficient tends to decrease as concentration increases. For a repulsive interaction between particles, the diffusion coefficient tends to increase as concentration increases.
- In the case of an attractive interaction between particles, particles exhibit a tendency to coalesce and form clusters if their concentration lies above a certain threshold. This is equivalent to a precipitation chemical reaction (and if the considered diffusing particles are chemical molecules in solution, then it is a precipitation).
## Molecular diffusion of gases {#molecular_diffusion_of_gases}
Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A. Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as −dC~A~/dx, where C~A~ is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is −dC~B~/dx. The rate of diffusion of A, N~A~, depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick\'s law
: $N_{A}= -D_{AB} \frac{dC_{A}}{dx}$ (only applicable for no bulk motion)
where D is the diffusivity of A through B, proportional to the average molecular velocity and therefore dependent on the temperature and pressure of the gases. The rate of diffusion N~A~ is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady state conditions, in which neither dC~A~/dx nor dC~B~/dx changes with time, equimolecular counterdiffusion is considered first.
## Equimolecular counterdiffusion {#equimolecular_counterdiffusion}
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is $N_A=-N_B$.
The partial pressure of A changes by dP~A~ over the distance dx. Similarly, the partial pressure of B changes by dP~B~. As there is no difference in total pressure across the element (no bulk flow), we have
$$\frac{dP_A}{dx}=-\frac{dP_B}{dx}$$.
For an ideal gas the partial pressure is related to the molar concentration by the relation
$$P_{A}V=n_{A}RT$$
where n~A~ is the number of moles of gas *A* in a volume *V*. As the molar concentration *C~A~* is equal to *n~A~/V*, it follows that
$$P_{A}=C_{A}RT$$
Consequently, for gas A,
$$N_{A}=-D_{AB} \frac{1}{RT} \frac{dP_{A}}{dx}$$
where D~AB~ is the diffusivity of A in B. Similarly,
$$N_{B}=-D_{BA} \frac{1}{RT} \frac{dP_{B}}{dx}=D_{AB} \frac{1}{RT}\frac{dP_{A}}{dx}$$
Considering that dP~A~/dx = −dP~B~/dx, it follows that D~AB~ = D~BA~ = D. If the partial pressure of A at x~1~ is P~A1~ and at x~2~ is P~A2~, integration of the above equation gives
$$N_{A}=-\frac{D}{RT} \frac{(P_{A2}-P_{A1})}{x_{2}-x_{1}}$$
A similar equation may be derived for the counterdiffusion of gas B
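As a numerical sketch of the integrated equation (diffusivity, temperature, and partial pressures are hypothetical values chosen for illustration), the fluxes of the two gases come out equal and opposite:

```python
R = 8.314                    # gas constant, J/(mol K)
T = 298.0                    # temperature, K
D = 2.0e-5                   # diffusivity D = D_AB = D_BA, m^2/s (hypothetical)
P_A1, P_A2 = 2.0e4, 1.0e4    # partial pressures of A at x1 and x2, Pa
x1, x2 = 0.0, 0.05           # positions, m

N_A = -(D / (R * T)) * (P_A2 - P_A1) / (x2 - x1)   # mol/(m^2 s)
N_B = -N_A   # equimolecular counterdiffusion: B flows equally the other way
print(N_A, N_B)  # about +1.6e-3 and -1.6e-3
```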
| 735 |
Molecular diffusion
| 1 |
8,648 |
# Daffynition
A **daffynition** (a portmanteau blend of *daffy* and *definition*) is a form of pun involving the reinterpretation of an existing word, on the basis that it sounds like another word (or group of words). Presented in the form of dictionary definitions, they are similar to transpositional puns, but often much less complex and easier to create.
Under the name **Uxbridge English Dictionary**, making up daffynitions is a popular game on the BBC Radio 4 comedy quiz show *I\'m Sorry I Haven\'t a Clue*.
A lesser-known subclass of daffynition is the *goofinition*, which relies strictly on literal associations and correct spellings, such as \"lobster = a weak tennis player\". This play on words is similar to Cockney rhyming slang.
## Examples
- acrostic: An angry bloodsucking arachnid. (a-cross-tick)
- American: A happy cylindrical food container. (a-merry-can)
- apéritif: A set of dentures. (a-pair-of-teeth)
- avoidable: What a bullfighter tries to do. (avoid-a-bull)
- buccaneer: Too much to pay for corn. (\[a\]-buck-an-ear)
- dandelion: A fashionably dressed big cat (dandy-lion)
- decadent: Possessing only ten teeth. (deca-dent)
- denial: A river in Egypt. (the-Nile)
- devastation: Where people wait for buses. (the-bus-station)
- dilate: To live long. (die-late)
- euthanasia: Teenagers in the world\'s largest continent. (youth-in-Asia)
- fortunate: Consumption of an expensive meal. (fortune-ate)
- impolite: A flaming goblin. (imp-alight)
- indistinct: Where one places dirty dishes. (in-the-sink)
- information: How geese fly. (in-formation)
- innuendoes: Italian suppositories. (`{{not a typo|in-you-end-os}}`{=mediawiki})
- insolent: Fallen off the Isle of Wight ferry. (in-Solent)
- isolate: Me not on time. (I-(am)-so-late)
- laburnum: French for barbecue. (la-burn-em)
- legend: A foot. (leg-end)
- oboe: A French tramp. (hobo)
- paradox: Two doctors. (pair-of-docs) or where one ties two boats. (pair-of-docks)
- pasteurise: Too far to see. (`{{not a typo|past-your-eyes}}`{=mediawiki})
- protein: In favour of youth. (pro-teen)
- propaganda: A gentlemanly goose. (proper-gander) or to look at something very carefully. (proper-gander, where *gander* is slang for a look)
- recycle: To repair a bicycle, or obtain a replacement bicycle. (re-cycle)
- relief: What trees do in Spring. (re-leaf)
- specimen: An Italian astronaut. (spaceman)
- symmetry: A South African or New Zealand graveyard. (cemetery)
| 360 |
Daffynition
| 0 |
8,649 |
# List of football clubs in the Netherlands
The **Dutch Football League** is organized by the Royal Dutch Football Association (KNVB, Koninklijke Nederlandse Voetbalbond). The most successful teams are Ajax (36 titles), PSV (24) and Feyenoord (16). Important teams of the past are HVV (10 titles), Sparta Rotterdam (6 titles) and Willem II (3 titles).
The annual match that marks the beginning of the season is called the Johan Cruijff Schaal (Johan Cruyff Shield). Contenders are the champions and the cup winners of the previous season.
## Dutch professional clubs {#dutch_professional_clubs}
Club Location Venue Capacity Manager
------------------- ------------------- -------------------------- ---------- --------------------
ADO Den Haag The Hague Cars Jeans Stadion 15,000 Darije Kalezić
Ajax Amsterdam Johan Cruyff Arena 53,490 Francesco Farioli
AZ Alkmaar AFAS Stadion 17,023 Maarten Martens
Excelsior Rotterdam Stadion Woudestein 4,400 Marinus Dijkhuizen
Feyenoord Rotterdam Stadion Feijenoord 51,177 Brian Priske
Go Ahead Eagles Deventer Adelaarshorst 10,400 René Hake
Groningen Groningen Noordlease Stadion 22,550 Dick Lukkien
Heerenveen Heerenveen Abe Lenstra Stadion 27,224 Robin van Persie
Heracles Almelo Almelo Polman Stadion 13,500 Erwin van de Looi
NEC Nijmegen Stadion de Goffert 12,500 Rogier Meijer
PEC Zwolle Zwolle MAC³PARK Stadion 13,250 Johnny Jansen
PSV Eindhoven Philips Stadion 36,500 Peter Bosz
Roda JC Kerkrade Parkstad Limburg Stadion 19,979 Bas Sibum
Sparta Rotterdam Rotterdam Het Kasteel 11,026 Jeroen Rijsdijk
Twente Enschede De Grolsch Veste 30,205 Joseph Oosting
Utrecht Utrecht Stadion Galgenwaard 23,750 Ron Jans
Vitesse Arnhem GelreDome 25,500 Edward Sturing
Willem II Tilburg Tilburg Koning Willem II Stadion 14,500 Peter Maes
Almere City Almere Yanmar Stadion 3,000 Hedwiges Maduro
Cambuur Leeuwarden Cambuur Stadion 10,500 Henk de Jong
De Graafschap Doetinchem Stadion De Vijverberg 12,600 Jan Vreman
Den Bosch \'s-Hertogenbosch De Vliert 9,000 David Nascimento
Dordrecht Dordrecht GN Bouw Stadion 4,235 Melvin Boel
FC Eindhoven Eindhoven Jan Louwers Stadion 4,200 Willem Weijs
Emmen Emmen Univé Stadion 8,600 Fred Grim
Fortuna Sittard Sittard Fortuna Sittard Stadion 12,500 Danny Buijs
Helmond Sport Helmond Stadion De Braak 4,100 Bob Peeters
MVV Maastricht De Geusselt 10,234 Maurice Verberne
NAC Breda Breda Rat Verlegh Stadion 19,000 Carl Hoefkens
TOP Oss Oss Heesen Yachts Stadion 4,700 Ruud Brood
RKC Waalwijk Waalwijk Mandemakers Stadion 7,508 Henk Fraser
Telstar Velsen TATA Steel Stadion 3,625 Anthony Correia
Volendam Volendam Kras Stadion 6,260 Regillio Simons
VVV-Venlo Venlo De Koel 8,000 Rick Kruys
| 375 |
List of football clubs in the Netherlands
| 0 |
8,649 |
# List of football clubs in the Netherlands
## Former Dutch league teams {#former_dutch_league_teams}
- Koninklijke HFC
- AVV RAP (of Amsterdam) were the first official champions of the Netherlands in 1899. The club became a cricket club in 1916, after winning a total of 5 national football titles.
- Fortuna 54 (of Geleen) and Sittardia (of Sittard) merged to form Fortuna Sittard in 1968.
- Blauw Wit, DWS and De Volewijckers merged to form FC Amsterdam in 1972, which ceased to exist in 1982.
- PEC and the Zwolsche Boys merged to form PEC Zwolle in 1971, which became FC Zwolle in 1990.
- Sportclub Enschede and the Enschedese Boys merged to form FC Twente in 1965.
- DOS, Elinkwijk and Velox merged to form FC Utrecht in 1970.
- GVAV became FC Groningen in 1971.
- Alkmaar 54 and FC Zaanstreek merged to form AZ in 1967.
- Roda Sport and Rapid JC merged to form Roda JC in 1962.
- BVC Rotterdam and BVC Flamingos merged to form Scheveningen Holland Sport in 1954, which merged with ADO in 1971 to form FC Den Haag, and became ADO Den Haag in 1996.
- SVV and Dordrecht \'90 merged to form SVV/Dordrecht \'90 in 1991. The club has since been renamed FC Dordrecht.
- VC Vlissingen (from Flushing) became a professional club in 1990, changed its name to VCV Zeeland a year later, and became an amateur club again in 1992.
- FC Wageningen (founded in 1911) won the Dutch cup in 1939 and 1948, joined the Dutch professional league when it was formed in 1954, and remained professional until the club went bankrupt in 1992.
- HVC of Amersfoort was formed in 1905, joined the league in 1954, was renamed to SC Amersfoort in 1973 and went bankrupt in 1982.
- Fortuna Vlaardingen (formed in 1904) joined the professional league in 1955, was renamed to FC Vlaardingen in 1974 and went bankrupt in 1981.
- HFC Haarlem (formed in 1889) joined the professional league in 1954 and remained professional until the club went bankrupt in 2010.
- RBC Roosendaal (formed in 1927) played in the professional league from 1955 to 1971, rejoined it in 1983, and remained professional until the club went bankrupt in 2011.
- AGOVV Apeldoorn (formed in 1913) played in the professional league from 1954 to 1971, returned to professional football on 1 July 2003, and went bankrupt in 2013.
- SC Veendam (formed in 1894) joined the professional league in 1954, and went bankrupt in 2013.
- Zwart-Wit \'28 won the national amateur championship in 1971 and the national cup for women in 2000. The club went bankrupt in 2004.
| 440 |
List of football clubs in the Netherlands
| 1 |
8,650 |
# Dragon 32/64
The **Dragon 32** and **Dragon 64** are 8-bit home computers that were built in the 1980s. The Dragons are very similar to the TRS-80 Color Computer, and were produced for the European market by Dragon Data, Ltd., initially in Swansea, Wales, before moving to Port Talbot, Wales (until 1984), and by Eurohard S.A. in Casar de Cáceres, Spain (from 1984 to 1987), and for the US market by **Tano Corporation** of New Orleans, Louisiana. The model numbers reflect the primary difference between the two machines, which have 32 and 64 kilobytes of RAM, respectively.
Dragon Data introduced the Dragon 32 microcomputer in August 1982, followed by the Dragon 64 a year later. Despite initial success, the Dragon faced technical limitations in graphics capabilities and hardware-supported text modes, which restricted its appeal in the gaming and educational markets. Dragon Data collapsed in 1984 and was acquired by Spanish company Eurohard S.A. However, Eurohard filed for bankruptcy in 1987.
The Dragon computers were built around the Motorola MC6809E processor and featured a composite monitor port, allowing connection to (at the time) modern TVs. They used analog joysticks and had a range of peripherals and add-ons available. The Dragon had several high-resolution display modes, but limited graphics capabilities compared to other home computers of the time.
The Dragon came with a Microsoft BASIC interpreter in ROM, which allowed instant system start-up. The Dragon 32/64 was capable of running multiple disk operating systems, and a range of popular games were ported to the system.
Overall, the Dragon computers were initially well-received but faced limitations that hindered their long-term success.
## Dragon 32 vs. Dragon 64 {#dragon_32_vs._dragon_64}
Aside from the amount of RAM, the Dragon 64 also has a functional RS-232 serial port that was not included on the Dragon 32. A minor difference between the two Dragon models is the outer case colour: the Dragon 32 is beige and the Dragon 64 is light grey. Besides the case, branding and the Dragon 64\'s serial port, the two machines look the same. The Dragon 32 is upgradable to a Dragon 64. In some cases, buyers of the Dragon 32 found that they had actually received a Dragon 64 unit.
| 365 |
Dragon 32/64
| 0 |
8,650 |
# Dragon 32/64
## Product history {#product_history}
Dragon Data entered the market in August 1982 with the Dragon 32. The Dragon 64 followed a year later. The computers sold well initially and attracted the interest of independent software developers including Microdeal. A companion magazine, *Dragon User*, began publication shortly after the microcomputer\'s launch.
Despite this initial success, there were two technical impediments to the Dragon\'s acceptance. The graphics capabilities trailed behind other computers such as the ZX Spectrum and BBC Micro, a significant shortcoming for the games market. Additionally, as a cost-cutting measure, the hardware-supported text modes only included upper case characters; this restricted the system\'s appeal to the educational market.
Dragon Data collapsed in June 1984. It was acquired by the Spanish company Eurohard S.A., which moved the factory from Wales to Cáceres and released the **Dragon 200** (a Dragon 64 with a new case that allowed a monitor to be placed on top) and the **Dragon 200-E** (an enhanced Dragon 200 with both upper and lower case characters and a Spanish keyboard), but ultimately filed for bankruptcy in 1987. The remaining stock from Eurohard was purchased by a Spanish electronics hobbyist magazine and given away to those who paid for a three-year subscription, until 1992.
In the United States it was possible to purchase the Tano Dragon new in box until early 2017 from California Digital, a retailer that purchased the remaining stock.
(Image gallery: Dragon 32 front, back and side views; Dragon 64 top, back and side views; the Dragon by Tano; Dragon 200 box, top view and back view; Dragon 200-E top, back, left and right views; Dragon 200-E upper- and lower-case Spanish character set.)
## Reception
*BYTE* wrote in January 1983 that the Dragon 32 \"offers more features for the money than most of its competitors\", but \"there\'s nothing exceptional about it\". The review described it as a redesigned, less-expensive Color Computer with 32K of RAM and a better keyboard.
## Games
Initially, the Dragon was reasonably well supported by the major UK software companies, with versions of popular games from other systems being ported to the Dragon. Top-selling games available for the Dragon include *Arcadia* (Imagine), *Chuckie Egg* (A&F), *Manic Miner* and sequel *Jet Set Willy* (Software Projects), *Hunchback* (Ocean) and *Football Manager* (Addictive). There were also companies that concentrated on the Dragon, such as Microdeal. Their character Cuthbert appeared in several games, with *Cuthbert Goes Walkabout* also being converted for Atari 8-bit and Commodore 64 systems.
Due to the limited graphics modes of the Dragon, converted games had a distinctive appearance, with colour games being usually played on a green or white background (rather than the more common black on other systems) or games with high-definition graphics having to run in black and white.
When the system was discontinued, support from software companies also effectively ended. However, Microdeal continued supporting the Dragon until January 1988. Some of their final games developed for the Dragon in 1987 such as *Tanglewood* and *Airball* were also converted for 16-bit machines such as the Atari ST and Amiga.
## Differences from the TRS-80 Color Computer {#differences_from_the_trs_80_color_computer}
Both the Dragon and the TRS-80 Color Computer are based on a Motorola data sheet design for the MC6883 SAM (MMU) chip for memory management and peripheral control.
The systems are sufficiently similar that a significant fraction of the compiled software produced for one machine will run on the other. Software running via the built-in Basic interpreters also has a high level of compatibility, but only after they are re-tokenized, which can be achieved fairly easily by transferring via cassette tape with appropriate options.
It is possible to permanently convert a Color Computer into a Dragon by swapping the original Color Computer ROM and rewiring the keyboard cable.
The Dragon has additional circuitry to make the MC6847 VDG compatible with European 625-line PAL television standards, rather than the US 525-line NTSC standard, and a Centronics parallel printer port not present on the TRS-80. Some models were manufactured with NTSC video for the US and Canadian markets
| 718 |
Dragon 32/64
| 1 |
8,662 |
# David Rice Atchison
**David Rice Atchison** (August 11, 1807`{{spaced ndash}}`{=mediawiki}January 26, 1886) was a mid-19th-century Democratic United States Senator from Missouri. He served as president pro tempore of the United States Senate for six years. Atchison served as a major general in the Missouri State Militia in 1838 during Missouri\'s Mormon War and as a Confederate brigadier general during the American Civil War under Major General Sterling Price in the Missouri Home Guard. Some of Atchison\'s associates claimed that for 24 hours---Sunday, March 4, 1849, through noon on Monday---he may have been acting president of the United States. This belief, however, is dismissed by most scholars.
Atchison, owner of many slaves and a plantation, was a prominent pro-slavery activist and Border Ruffian leader, deeply involved with violence against abolitionists and other free-staters during the \"Bleeding Kansas\" events that preceded admission of the state to the Union.
## Early life {#early_life}
Atchison was born to William Atchison and his wife in Frogtown (later Kirklevington), which is now part of Lexington, Kentucky. He was educated at Transylvania University in Lexington. Classmates included five future Democratic senators (Solomon Downs of Louisiana, Jesse Bright of Indiana, George Wallace Jones of Iowa, Edward Hannegan of Indiana, and Jefferson Davis of Mississippi). Atchison completed law studies and was admitted to the Kentucky bar in 1829.
## Missouri lawyer and politician {#missouri_lawyer_and_politician}
In 1830, he moved to Liberty in Clay County in western Missouri, and set up practice there. He also acquired a farm or plantation, with labor provided by enslaved African Americans. Atchison\'s law practice flourished, and his best-known client was Joseph Smith, founder of the Latter Day Saint Movement. Atchison represented Smith in land disputes with non-Mormon settlers in Caldwell County and Daviess County.
Alexander William Doniphan joined Atchison\'s law practice in Liberty in May 1833. The two became fast friends and spent many leisure time hours playing cards, going to horse races, hunting, fishing, and attending social functions and political events. Atchison, already a member of the Liberty Blues, a volunteer militia in Missouri, got Doniphan to join.
Atchison was elected to the Missouri House of Representatives in 1834. He worked hard for the Platte Purchase, which required Native American tribes to cede land to the United States and extended the northwestern boundary of Missouri to the Missouri River in 1837.
When early disputes broke out into the Mormon War of 1838, Atchison was appointed a major general in the state militia. He took part in suppressing violence by both sides.
Active in the Democratic Party, Atchison was re-elected to the Missouri State House of Representatives in 1838. In 1841, he was appointed a circuit court judge for the six-county area of the Platte Purchase. In 1843, he was named a county commissioner in Platte County, where he then lived.
| 467 |
David Rice Atchison
| 0 |
8,662 |
# David Rice Atchison
## Senate career {#senate_career}
(Image: Statue in front of the Clinton County Courthouse, Plattsburg, Missouri.) In October 1843, Atchison was appointed to the U.S. Senate to fill the vacancy left by the death of Lewis F. Linn. He was the first senator from western Missouri to serve in this position. At age 36, he was the youngest senator from Missouri up to that time. Atchison was re-elected to a full term in his own right in 1849.
Atchison was very popular with his fellow Senate Democrats. When the Democrats took control of the US Senate in December 1845, they chose Atchison as president pro tempore, placing him second in succession for the presidency. He also was responsible for presiding over the Senate when the vice president was absent. At 38, and with only two years in the Senate, he was notably young and low in seniority to gain such a position.
In 1849, Atchison stepped down as president pro tempore in favor of William R. King. King, in turn, yielded the office back to Atchison in December 1852, after being elected Vice President of the United States. Atchison continued as president pro tempore until December 1854.
As a senator, Atchison was a fervent advocate of slavery and territorial expansion. He supported the annexation of Texas and the U.S.-Mexican War. Atchison and Thomas Hart Benton, Missouri\'s other senator, became rivals and finally enemies, although both were Democrats. Benton declared himself to be against slavery in 1849. In 1851 Atchison allied with the Whigs to defeat incumbent Benton for re-election.
Benton, intending to challenge Atchison in 1854, began to agitate for territorial organization of the area west of Missouri (now the states of Kansas and Nebraska) so that it could be opened to settlement. To counter this, Atchison proposed that the area be organized *and* that the section of the Missouri Compromise banning slavery there be repealed in favor of popular sovereignty. Under this plan, settlers in each territory would vote to decide whether they would allow slavery.
At Atchison\'s request, Senator Stephen Douglas of Illinois introduced the Kansas--Nebraska Act, which embodied this idea, in November 1853. The act was passed and became law in May 1854, establishing the Territories of Kansas and Nebraska.
| 373 |
David Rice Atchison
| 1 |
8,662 |
# David Rice Atchison
## Senate career {#senate_career}
### Border Ruffians {#border_ruffians}
Both Douglas and Atchison had believed that Nebraska would be settled by Free-State men from Iowa and Illinois, and Kansas by pro-slavery Missourians and other Southerners, thus preserving the numerical balance between free states and slave states in the nation. In 1854 Atchison helped found the town of Atchison, Kansas, as a pro-slavery settlement. The town (and county) were named for him.
While Southerners supported the idea of settling in Kansas, few migrated there. Most free-soilers preferred Kansas to Nebraska. Furthermore, anti-slavery activists throughout the North came to view Kansas as a battleground and formed societies to encourage free-soil settlers to go to Kansas, to ensure there would be enough voters in both Kansas and Nebraska to approve their entry as free states.
It appeared as if the Kansas Territorial legislature to be elected in March 1855 would be controlled by free-soilers and ban slavery. Atchison and his supporters viewed this as a breach of faith. An angry Atchison called on pro-slavery Missourians to uphold slavery by force and \"to kill every God-damned abolitionist in the district\" if necessary. He recruited an immense mob of heavily armed Missourians, the infamous \"Border Ruffians\". On election day, March 30, 1855, Atchison led 5,000 Border Ruffians into Kansas. They seized control of all polling places at gunpoint, cast tens of thousands of fraudulent votes for pro-slavery candidates, and elected a pro-slavery legislature.
The outrage was nonetheless accepted by the Federal government. When Territorial Governor Andrew Reeder objected, President Franklin Pierce fired him.
Despite this show of force, far more free-soilers than pro-slavery settlers migrated to Kansas. There were continual raids and ambushes by both sides in \"Bleeding Kansas\". In spite of the best efforts of Atchison and the Ruffians, Kansas rejected slavery and finally became a free state in 1861.
Charles Sumner, in the epic \"Crimes Against Kansas\" speech on May 19, 1856, exposed Atchison\'s role in the invasion, tortures, and killings in Kansas. Speaking in the flamboyant style he and others used, lacing his prose with references to Roman history, Sumner compared Atchison to Roman Senator Catiline, who betrayed his country in a plot to overthrow the existing order. For two days, Sumner listed crime after crime in detail, complete with documentation from newspapers and letters of the time, showing the tortures and violence committed by Atchison and his men.
Two days later, Atchison gave his own speech, wholly unaware that he had been exposed on the Senate floor in such a fashion. The speech was addressed to Texas men whom he had just met and who, as Atchison revealed in the speech itself, had been hired and paid by \"authorities in Washington\". They were about to invade Lawrence, Kansas. Atchison made the men promise to kill and \"draw blood\", and boasted of his flag, which was red for \"Southern Rights\" and the color of blood. They would press \"to blood\" the spread of slavery into Kansas. He revealed in this speech that the immediate goal of the invasion was to stop the newspaper in Lawrence from publishing anti-slavery material; Atchison\'s men had made it a crime to publish anti-slavery newspapers in Kansas.
Atchison made it clear the men were to kill and draw blood, told them they would be \"well paid\", and encouraged them to plunder the homes they invaded. This came after the many tortures and killings that Sumner had detailed in his Crimes Against Kansas speech. In other words, things were about to get much worse now that Atchison had his hired men from Texas.
### Defeated for re-election {#defeated_for_re_election}
Atchison\'s Senate term expired on March 3, 1855. He sought election to another term, but the Democrats in the Missouri legislature were split between him and Benton, while the Whig minority put forward their own man. No senator was elected until January 1857, when James S. Green was chosen.
### Railroad proposal {#railroad_proposal}
When the first transcontinental railroad was proposed in the 1850s, Atchison called for it to be built along the central route (from St. Louis through Missouri, Kansas, and Utah), rather than the southern route (from New Orleans through Texas and New Mexico). Naturally, his suggested route went through Atchison.
| 705 |
David Rice Atchison
| 2 |
8,662 |
# David Rice Atchison
## American Civil War {#american_civil_war}
Atchison and his law partner Doniphan fell out over politics in 1859--1861, disagreeing on how Missouri should proceed. Atchison favored secession, while Doniphan was torn and would remain, for the most part, non-committal. Privately, Doniphan favored the Union, but found it difficult to oppose his friends and associates.
During the secession crisis in Missouri at the beginning of the American Civil War, Atchison sided with Missouri\'s pro-Confederate governor, Claiborne Jackson. He was appointed a major general in the Missouri State Guard. Atchison actively recruited State Guardsmen in northern Missouri and served with Guard commander General Sterling Price in the summer campaign of 1861. In September 1861, Atchison led 3,500 State Guard recruits across the Missouri River to reinforce Price and defeated Union troops that tried to block his force in the Battle of Liberty.
Atchison served in the State Guard through the end of 1861. In March 1862, Union forces in the Trans-Mississippi theater won a decisive victory at Pea Ridge in Arkansas and secured Union control of Missouri. Atchison then resigned from the army over reported strategy arguments with Price and moved to Texas for the duration of the war. After the war, he retired to his farm near Gower and denied many of the pro-slavery public statements he had made before the Civil War. His retirement cottage outside Plattsburg, Missouri, burned to the ground before he died in 1886, resulting in the complete loss of his library of books, documents, and letters documenting his role in the Mormon War, Indian affairs, pro-slavery activities, and the Civil War, as well as legislation covering his career as a lawyer, senator, and soldier.
| 280 |
David Rice Atchison
| 3 |
8,662 |
# David Rice Atchison
## Purported one-day presidency {#purported_one_day_presidency}
Inauguration Day---March 4---fell on a Sunday in 1849, and so president-elect Zachary Taylor did not take the presidential oath of office until the next day out of religious concerns. Even so, the term of the outgoing president, James K. Polk, ended at noon on March 4. On March 2, outgoing vice president George M. Dallas relinquished his position as president of the Senate. Congress had previously chosen Atchison as president pro tempore. In 1849, according to the Presidential Succession Act of 1792, the Senate president pro tempore immediately followed the vice president in the presidential line of succession. As Dallas\'s term also ended at noon on the 4th, and as neither Taylor nor vice president-elect Millard Fillmore had been sworn into office on that day, it was claimed by some of Atchison\'s friends and colleagues that from March 4--5, 1849, Atchison was acting president of the United States.
Historians, constitutional scholars, and biographers dismiss the claim. They point out that Atchison\'s Senate term had also ended on March 4. When the Senate of the new Congress convened on March 5 to allow new senators and the new vice president to take the oath of office, the secretary of the Senate called members to order, as the Senate had no president pro tempore. Although an incoming president must take the oath of office before any official acts, the prevailing view is that presidential succession does not depend on the oath. Even supposing that an oath was necessary, Atchison never took it, so he was no more the president than Taylor.
In September 1872, Atchison, who never himself claimed that he was technically president, told a reporter for the *Plattsburg Lever*: `{{blockquote|It was in this way: Polk went out of office on March 3, 1849, on Saturday at 12 noon. The next day, the 4th, occurring on Sunday, Gen. Taylor was not inaugurated. He was not inaugurated till Monday, the 5th, at 12 noon. It was then canvassed among Senators whether there was an interregnum (a time during which a country lacks a government). It was plain that there was either an [[interregnum]] or I was the President of the United States being chairman of the Senate, having succeeded Judge [[Willie Person Mangum|Mangum]] of North Carolina. The judge waked me up at 3 o'clock in the morning and said jocularly that as I was President of the United States he wanted me to appoint him as secretary of state. I made no pretense to the office, but if I was entitled in it I had one boast to make, that not a woman or a child shed a tear on account of my removing any one from office during my incumbency of the place. A great many such questions are liable to arise under our form of government.<ref>{{cite web| url = http://www.rootsweb.com/~moclinto/histsoc/| title = Clinton Co. Historical Society}}</ref>}}`{=mediawiki}
## Death
(Image: David Rice Atchison\'s tombstone.) Atchison died on January 26, 1886, at his home near Gower, Missouri, at the age of 78. He was buried at Greenlawn Cemetery in Plattsburg, Missouri. His grave marker reads \"President of the United States for One Day.\"
## Legacy
- Atchison, Kansas, county seat of Atchison County, Kansas.
- The Atchison, Topeka, and Santa Fe Railroad utilized the town name.
- Atchison County, Missouri
- Atchison Township, Clinton County, Missouri
- Atchison Township, Nodaway County, Missouri
- USS *Atchison County* (LST-60), a United States Navy tank landing ship
- In 1991, Atchison was inducted into the Hall of Famous Missourians, and a bronze bust depicting him is on permanent display in the rotunda of the Missouri State Capitol.
- The Atchison County Historical Museum, in Atchison, Kansas, includes an exhibit titled the \"World\'s Smallest Presidential Library\".
- A historical marker designating the approximate site of Atchison\'s birth is located along Highway 1974 in the Landsdowne neighborhood of Lexington, Kentucky
| 645 |
David Rice Atchison
| 4 |
8,678 |
# Donn
In Irish mythology, **Donn** (\"the dark one\", from *Dhuosnos*) is an ancestor of the Gaels and is believed to have been a god of the dead. Donn is said to dwell in **Tech Duinn** (the \"house of Donn\" or \"house of the dark one\"), where the souls of the dead gather. He may have originally been an aspect of the Dagda. Folklore about Donn survived into the modern era in parts of Ireland, in which he is said to be a phantom horseman riding a white horse
| 89 |
Donn
| 0 |
8,681 |
# Data compression ratio
**Data compression ratio**, also known as **compression power**, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed size.
## Definition
Data compression ratio is defined as the ratio between the *uncompressed size* and *compressed size*:
$${\rm Compression\;Ratio} = \frac{\rm Uncompressed\;Size}{\rm Compressed\;Size}$$
Thus, a representation that compresses a file\'s storage size from 10 MB to 2 MB has a compression ratio of 10/2 = 5, often notated as an explicit ratio, 5:1 (read \"five to one\"), or as an implicit ratio, 5/1. This formulation applies equally for compression, where the uncompressed size is that of the original, and for decompression, where the uncompressed size is that of the reproduction.
Sometimes the *space saving* is given instead, which is defined as the reduction in size relative to the uncompressed size:
$${\rm Space\;Saving} = 1 - \frac{\rm Compressed\;Size}{\rm Uncompressed\;Size}$$
Thus, a representation that compresses the storage size of a file from 10 MB to 2 MB yields a space saving of 1 - 2/10 = 0.8, often notated as a percentage, 80%.
For signals of indefinite size, such as streaming audio and video, the compression ratio is defined in terms of uncompressed and compressed data rates instead of data sizes:
$${\rm Compression\;Ratio} = \frac{\rm Uncompressed\;Data\;Rate}{\rm Compressed\;Data\;Rate}$$
and instead of space saving, one speaks of **data-rate saving**, which is defined as the data-rate reduction relative to the uncompressed data rate:
$${\rm Data\;Rate\;Saving} = 1 - \frac{\rm Compressed\;Data\;Rate}{\rm Uncompressed\;Data\;Rate}$$
For example, uncompressed songs in CD format have a data rate of 16 bits/channel x 2 channels x 44.1 kHz ≅ 1.4 Mbit/s, whereas AAC files on an iPod are typically compressed to 128 kbit/s, yielding a compression ratio of 10.9, for a data-rate saving of 0.91, or 91%.
When the uncompressed data rate is known, the compression ratio can be inferred from the compressed data rate.
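These definitions are straightforward to compute. The following Python sketch (illustrative only) reuses the examples from this section:

```python
def compression_ratio(uncompressed: float, compressed: float) -> float:
    """Ratio of uncompressed to compressed size (or data rate)."""
    return uncompressed / compressed

def space_saving(uncompressed: float, compressed: float) -> float:
    """Relative reduction in size (or data rate): 1 - compressed/uncompressed."""
    return 1 - compressed / uncompressed

# File example: 10 MB compressed to 2 MB.
print(compression_ratio(10, 2))   # 5.0 -> a 5:1 ratio
print(space_saving(10, 2))        # 0.8 -> 80% space saving

# Streaming example: CD audio (~1.4 Mbit/s) vs. 128 kbit/s AAC.
print(compression_ratio(1.4e6, 128e3))  # ~10.9
print(space_saving(1.4e6, 128e3))       # ~0.91 -> 91% data-rate saving
```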
## Lossless vs. Lossy {#lossless_vs._lossy}
Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but it does not generally achieve compression ratios much better than 2:1 because of the intrinsic entropy of the data. Compression algorithms which provide higher ratios either incur very large overheads or work only for specific data sequences (e.g. compressing a file with mostly zeros). In contrast, lossy compression (e.g. JPEG for images, or MP3 and Opus for audio) can achieve much higher compression ratios at the cost of a decrease in quality, as visual or audio compression artifacts from the loss of important information are introduced; Bluetooth audio streaming is one example. A compression ratio of at least 50:1 is needed to get 1080i video into a 20 Mbit/s MPEG transport stream.
## Uses
The data compression ratio can serve as a measure of the complexity of a data set or signal. In particular, it is used to approximate the algorithmic complexity. It is also used to see how much of a file can be compressed without increasing its original size.
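As an illustration of this use, the compressibility of a byte string can be measured with any lossless codec; the sketch below uses Python\'s standard-library zlib, chosen here purely as an example:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size divided by zlib-compressed size."""
    return len(data) / len(zlib.compress(data))

print(compression_ratio(b"\x00" * 100_000))    # highly redundant: ratio in the hundreds
print(compression_ratio(os.urandom(100_000)))  # random data: ratio just below 1
```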
| 509 |
Data compression ratio
| 0 |
8,693 |
# Diets of Nuremberg
The **Diets of Nuremberg**, also called the Imperial Diets of Nuremberg, took place at different times between the Middle Ages and the 17th century.
The first Diet of Nuremberg, in 1211, elected the future emperor Frederick II of Hohenstaufen as German king.
At the Diet of 1356 the Emperor Charles IV issued the Golden Bull of 1356, which required each Holy Roman Emperor to summon the first Imperial Diet after his election at Nuremberg. Apart from that, a number of other diets were held there.
Important to Protestantism were the Diets of 1522 (\"First Diet of Nuremberg\"), 1524 (\"Second Diet of Nuremberg\") and 1532 (\"Third Diet of Nuremberg\").
## The 1522 Diet of Nuremberg {#the_1522_diet_of_nuremberg}
This Diet has become known mostly for the reaction of the papacy to the decision made on Luther at the Diet of Worms the previous year. The new pope, Adrian VI, sent his nuncio Francesco Chieregati to the Diet, to insist both that the Edict of Worms be executed and that action be taken promptly against Luther. This demand, however, was coupled with a promise of thorough reform in the Roman hierarchy, frankly admitting the partial guilt of the Vatican in the decline of the Church.
In the recess drafted on 9 February 1523, however, the German princes rejected this appeal. Using Adrian\'s admissions, they declared that they could not have it appear \'as though they wished to oppress evangelical truth and assist unchristian and evil abuses.\'
## The 1524 Diet of Nuremberg {#the_1524_diet_of_nuremberg}
This Diet generally took the same line as the previous one. The Estates reiterated their decision from the previous Diet. The Cardinal-legate, Campeggio, who was present, showed his disgust at the behaviour of the Estates. On 18 April, the Estates decided to call \'a general gathering of the German nation\', to meet at Speyer the following year and to decide what would be done until the meeting of the general council of the Church which they demanded. This resulted in the Diet of Speyer (1526), which in turn was followed by the Diet of Speyer (1529). The latter included the Protestation at Speyer
| 356 |
Diets of Nuremberg
| 0 |
8,704 |
# Outline of dance
The following outline is provided as an overview of and topical guide to dance:
Dance -- human movement either used as a form of expression or presented in a social, spiritual or performance setting. Choreography is the art of making dances, and the person who does this is called a choreographer. Definitions of what constitutes dance are dependent on social, cultural, aesthetic, artistic and moral constraints and range from functional movement (such as Folk dance) to codified, virtuoso techniques such as ballet. A great many dances and dance styles are performed to dance music.
## What type of thing is dance? {#what_type_of_thing_is_dance}
Dance (also called \"dancing\") can fit the following categories:
- an activity or behavior
- one of the arts -- a creative endeavor or discipline.
- one of the performing arts.
- Hobby -- regular activity or interest that is undertaken for pleasure, typically done during one\'s leisure time.
- Exercise -- bodily activity that enhances or maintains physical fitness and overall health and wellness.
- Sport -- bodily activity that displays physical exertion
- Recreation -- leisure time activity
- Ritual
Some other things can be named \"dance\" metaphorically; see dance (disambiguation).
## Types of dance {#types_of_dance}
Type of dance -- a particular dance or dance style. There are many varieties of dance. Dance categories are not mutually exclusive. For example, tango is traditionally a *partner dance*. While it is mostly *social dance*, its ballroom form may be *competitive dance*, as in DanceSport. At the same time it is enjoyed as *performance dance*, whereby it may well be a *solo dance*.
- List of dances
- List of dance style categories
- List of ethnic, regional, and folk dances by origin
- List of folk dances sorted by origin
- List of national dances
- List of DanceSport dances
### Dance genres {#dance_genres}
- Acro dance
- B-boying
- Ballet
- Bollywood dance
- Ballroom dance
- Baroque dance
- Belly dance
- Glossary of belly dance terms
- Bharatanatyam
- Casino (Cuban salsa)
- Cha-cha-cha
- Chicago stepping
- Circle dance
- Competitive dance
- Dance squad
- Contemporary dance
- Contra dance
- Country-western dance
- Disco
- Hustle
- Erotic dancing
- Fandango
- Flamenco
- Folk dance
- Hip-hop dance
- Indian classical dance
- Jazz dance
- Jig
- Jive
- Krumping
- Lambada
- Lap dance
- Limbo
- Line dance
- Mambo
- Modern dance
- Pole dance
- Polka
- Quickstep
- Salsa
- Sequence dance
- Street dance
- Swing
- Tango
- Tap dance
- Twist
- Two-step
- Thai classical dance
- Waltz
- War dance
- Zamba
### Dance styles by number of interacting dancers {#dance_styles_by_number_of_interacting_dancers}
- Solo dance -- a dance danced by an individual dancing alone.
- Partner dance -- dance with just 2 dancers, dancing together. In most partner dances, one, typically a man, is the leader; the other, typically a woman, is the follower. As a rule, they maintain connection with each other. In some dances the connection is loose and called dance handhold. In other dances the connection involves body contact.
- Glossary of partner dance terms
- Group dance -- dance danced by a group of people simultaneously. Group dances are generally, but not always, coordinated or standardized in such a way that all the individuals in the group are dancing the same steps at the same time. Alternatively, various groups within the larger group may be dancing different, but complementary, parts of the larger dance.
### Dance styles by main purpose {#dance_styles_by_main_purpose}
- Competitive dance --
- Erotic dance --
- Participation dance --
- Performance dance --
- Social dance --
- Concert dance --
## Geography of dance (by region) {#geography_of_dance_by_region}
: **Africa**
:
: **West Africa**
: Benin • Burkina Faso • Cape Verde • Côte d\'Ivoire • Gambia • Ghana • Guinea • Guinea-Bissau • Liberia • Mali • Mauritania • Niger • Nigeria • Senegal • Sierra Leone • Togo
:
: **North Africa**
: Algeria • Egypt (Ancient Egypt) • Libya • Mauritania • Morocco • Sudan • South Sudan • Tunisia • Western Sahara
:
: **Central Africa**
: Angola • Burundi • Cameroon • Central African Republic • Chad • The Democratic Republic of the Congo • Equatorial Guinea • Gabon • Republic of the Congo • Rwanda • São Tomé and Príncipe
:
: **East Africa**
: Burundi • Comoros • Djibouti • Eritrea • Ethiopia • Kenya • Madagascar • Malawi • Mauritius • Mozambique • Rwanda • Seychelles • Somalia • Tanzania • Uganda • Zambia • Zimbabwe
:
: **Southern Africa**
: Botswana • Eswatini • Lesotho • Namibia • South Africa
:
: **Dependencies**
: Mayotte (France) • St
| 849 |
Outline of dance
| 0 |
8,714 |
# December 10
| 3 |
December 10
| 0 |
8,727 |
# ΔT (timekeeping)
In precise timekeeping, **Δ*T*** (**Delta *T***, **delta-*T***, **delta*T***, or **D*T***) is a measure of the cumulative effect of the departure of the Earth\'s rotation period from the fixed-length day of International Atomic Time (86,400 seconds). Formally, Δ*T* is the time difference `{{math|1=Δ''T'' = TT − UT}}`{=mediawiki} between Universal Time (UT, defined by Earth\'s rotation) and Terrestrial Time (TT, independent of Earth\'s rotation). The value of ΔT for the start of 1902 was approximately zero; for 2002 it was about 64 seconds. So Earth\'s rotations over that century took about 64 seconds longer than would be required for days of atomic time. As well as this long-term drift in the length of the day there are short-term fluctuations in the length of day (`{{math|Δ''τ''}}`{=mediawiki}) which are dealt with separately.
Since early 2017, the length of the day has happened to be very close to the conventional value, and ΔT has remained within half a second of 69 seconds.
## Calculation
Earth\'s rotational speed is `{{math|1=''ν'' = {{sfrac|1|2π}}{{thin space}}{{sfrac|''dθ''|''dt''}}}}`{=mediawiki}, and a day corresponds to one period `{{math|1=''P'' = {{sfrac|1|''ν''}}}}`{=mediawiki}. A rotational acceleration `{{math|{{sfrac|''dν''|''dt''}}}}`{=mediawiki} gives a rate of change of the period of `{{math|1={{sfrac|''dP''|''dt''}} = −{{sfrac|1|''ν''<sup>2</sup>}}{{thin space}}{{sfrac|''dν''|''dt''}}}}`{=mediawiki}, which is usually expressed as `{{math|1=''α'' = ''ν''{{thin space}}{{sfrac|''dP''|''dt''}} = −{{sfrac|1|''ν''}}{{thin space}}{{sfrac|''dν''|''dt''}}}}`{=mediawiki}. This has dimension of reciprocal time and is commonly reported in units of milliseconds-per-day per century, symbolized as ms/day/cy (understood as (ms/day)/cy). Integrating `{{math|1=''α''}}`{=mediawiki} gives an expression for Δ*T* against time.
### Universal time {#universal_time}
Universal Time is a time scale based on the Earth\'s rotation, which is somewhat irregular over short periods (days up to a century), thus any time based on it cannot have an accuracy better than 1 in 10^8^. However, a larger, more consistent effect has been observed over many centuries: Earth\'s rate of rotation is inexorably slowing down. This observed change in the rate of rotation is attributable to two primary forces, one decreasing and one increasing the Earth\'s rate of rotation. Over the long term, the dominating force is tidal friction, which is slowing the rate of rotation, contributing about `{{math|1=''α'' = +2.3}}`{=mediawiki} ms/day/cy or `{{math|1={{sfrac|''dP''|''dt''}} = +2.3}}`{=mediawiki} ms/cy, which is equal to the very small fractional change `{{val|+7.3e-13}}`{=mediawiki} day/day. The most important force acting in the opposite direction, to speed up the rate, is believed to be a result of the melting of continental ice sheets at the end of the last glacial period. This removed their tremendous weight, allowing the land under them to begin to rebound upward in the polar regions, an effect that is still occurring today and will continue until isostatic equilibrium is reached. This \"post-glacial rebound\" brings mass closer to the rotational axis of the Earth, which makes the Earth spin faster, according to the law of conservation of angular momentum, similar to an ice skater pulling their arms in to spin faster. Models estimate this effect to contribute about −0.6 ms/day/cy. Combining these two effects, the net acceleration (actually a deceleration) of the rotation of the Earth, or the change in the length of the mean solar day (LOD), is +1.7 ms/day/cy or +62 s/cy^2^ or +46.5 ns/day^2^. This matches the average rate derived from astronomical records over the past 27 centuries.
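The quoted unit conversion can be checked directly; a short Python verification of the tidal-friction figure from this paragraph:

```python
# Tidal friction slows the rotation by about 2.3 ms/day per century (value
# from the text); expressed as a fractional change in the length of day per day:
alpha_s_per_day_per_cy = 2.3e-3   # s/day/cy
days_per_century = 36525
seconds_per_day = 86400

fractional_change = alpha_s_per_day_per_cy / days_per_century / seconds_per_day
print(fractional_change)  # ~7.3e-13 day/day, matching the quoted value
```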
### Terrestrial time {#terrestrial_time}
Terrestrial Time is a theoretical uniform time scale, defined to provide continuity with the former Ephemeris Time (ET). ET was an independent time-variable, proposed (and its adoption agreed) in the period 1948--1952 with the intent of forming a gravitationally uniform time scale as far as was feasible at that time, and depending for its definition on Simon Newcomb\'s *Tables of the Sun* (1895), interpreted in a new way to accommodate certain observed discrepancies. Newcomb\'s tables formed the basis of all astronomical ephemerides of the Sun from 1900 through 1983: they were originally expressed (and published) in terms of Greenwich Mean Time and the mean solar day, but later, in respect of the period 1960--1983, they were treated as expressed in terms of ET, in accordance with the adopted ET proposal of 1948--52. ET, in turn, can now be seen (in light of modern results) as close to the average mean solar time between 1750 and 1890 (centered on 1820), because that was the period during which the observations on which Newcomb\'s tables were based were performed. While TT is strictly uniform (being based on the SI second, every second is the same as every other second), it is in practice realised by International Atomic Time (TAI) with an accuracy of about 1 part in 10^14^.
| 753 |
ΔT (timekeeping)
| 0 |
8,727 |
# ΔT (timekeeping)
## Earth\'s rate of rotation {#earths_rate_of_rotation}
Earth\'s rate of rotation must be integrated to obtain time, which is Earth\'s angular position (specifically, the orientation of the meridian of Greenwich relative to the fictitious mean sun). Integrating +1.7 ms/d/cy and centering the resulting parabola on the year 1820 yields (to a first approximation) `{{nowrap|32 × <big>(</big>{{sfrac|year − 1820|100}}<big>)</big>{{su|p=2}} - 20}}`{=mediawiki} seconds for Δ*T*. Smoothed historical measurements of Δ*T* using total solar eclipses are about +17190 s in the year −500 (501 BC), +10580 s in 0 (1 BC), +5710 s in 500, +1570 s in 1000, and +200 s in 1500. After the invention of the telescope, measurements were made by observing occultations of stars by the Moon, which allowed the derivation of more closely spaced and more accurate values for Δ*T*. Δ*T* continued to decrease until it reached a plateau of +11 ± 6 s between 1680 and 1866. For about three decades immediately before 1902 it was negative, reaching −6.64 s. Then it increased to +63.83 s in January 2000 and +68.97 s in January 2018 and +69.361 s in January 2020, after even a slight decrease from 69.358 s in July 2019 to 69.338 s in September and October 2019 and a new increase in November and December 2019. This will require the addition of an ever-greater number of leap seconds to UTC as long as UTC tracks UT1 with one-second adjustments. (The SI second as now used for UTC, when adopted, was already a little shorter than the current value of the second of mean solar time.) Physically, the meridian of Greenwich in Universal Time is almost always to the east of the meridian in Terrestrial Time, both in the past and in the future. +17190 s or about `{{frac|4|3|4}}`{=mediawiki} h corresponds to 71.625°E. This means that in the year −500 (501 BC), Earth\'s faster rotation would cause a total solar eclipse to occur 71.625° to the east of the location calculated using the uniform TT.
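The parabolic first approximation above is easy to evaluate; the Python sketch below reproduces the smoothed historical values only roughly, as expected of a first approximation:

```python
def delta_t_approx(year: float) -> float:
    """First-approximation parabola for Delta-T in seconds, centred on 1820."""
    return 32 * ((year - 1820) / 100) ** 2 - 20

for y in (-500, 0, 500, 1000, 1500, 2000):
    print(y, round(delta_t_approx(y)))
# -500 -> 17204 (smoothed eclipse value: +17190 s)
#    0 -> 10580 (matches the quoted +10580 s)
# 2000 ->    84 (observed: ~64 s; recent decades deviate from the parabola)
```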
| 332 |
ΔT (timekeeping)
| 1 |
8,727 |
# ΔT (timekeeping)
## Values prior to 1955 {#values_prior_to_1955}
All values of Δ*T* before 1955 depend on observations of the Moon, either via eclipses or occultations. The angular momentum lost by the Earth due to friction induced by the Moon\'s tidal effect is transferred to the Moon, increasing its angular momentum, which means that its moment arm (approximately its distance from the Earth, i.e. precisely the semi-major axis of the Moon\'s orbit) is increased (for the time being about +3.8 cm/year), which via Kepler\'s laws of planetary motion causes the Moon to revolve around the Earth at a slower rate. The cited values of Δ*T* assume that the lunar acceleration (actually a deceleration, that is a negative acceleration) due to this effect is `{{math|{{sfrac|''d'''''n'''|''dt''}}}}`{=mediawiki} = −26″/cy^2^, where `{{math|'''n'''}}`{=mediawiki} is the mean sidereal angular motion of the Moon. This is close to the best estimate for `{{math|{{sfrac|''d'''''n'''|''dt''}}}}`{=mediawiki} as of 2002 of −25.858 ± 0.003″/cy^2^, so Δ*T* need not be recalculated given the uncertainties and smoothing applied to its current values. Nowadays, UT is the observed orientation of the Earth relative to an inertial reference frame formed by extra-galactic radio sources, modified by an adopted ratio between sidereal time and solar time. Its measurement by several observatories is coordinated by the International Earth Rotation and Reference Systems Service (IERS).
## Current values {#current_values}
Recall `{{math|1=Δ''T'' = TT − UT1}}`{=mediawiki} by definition. While TT is only theoretical, it is commonly realized as TAI + 32.184 seconds where TAI is UTC plus the current leap seconds, so `{{math|1=Δ''T'' = UTC − UT1 + (leap seconds) + 32.184 s}}`{=mediawiki}.
This can be rewritten as `{{math|1=Δ''T'' = (leap seconds) + 32.184 s − DUT1}}`{=mediawiki}, where DUT1 is UT1 − UTC. The value of DUT1 is sent out in the weekly IERS [Bulletin A](https://datacenter.iers.org/data/latestVersion/bulletinA.txt), as well as several time signal services, and by extension serve as a source of the current `{{math|1=Δ''T''}}`{=mediawiki}.
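In code form, the identity reads as below; this sketch assumes TAI − UTC = 37 s (the value in force since the start of 2017) and a hypothetical DUT1 value:

```python
def delta_t(tai_minus_utc: float, dut1: float) -> float:
    """Delta-T = (TAI - UTC) + 32.184 s - DUT1, where DUT1 = UT1 - UTC."""
    return tai_minus_utc + 32.184 - dut1

print(delta_t(37.0, -0.1))  # 69.284 s, consistent with the ~69 s noted above
```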
## Geological evidence {#geological_evidence}
Tidal deceleration rates have varied over the history of the Earth-Moon system. Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then. Based on geological studies of tidal rhythmites, the day was 21.9±0.4 hours long 620 million years ago and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies
| 433 |
ΔT (timekeeping)
| 2 |
8,728 |
# December 22
| 3 |
December 22
| 0 |
8,729 |
# David Deutsch
**David Elieser Deutsch** (`{{IPAc-en|d|ɔɪ|tʃ}}`{=mediawiki} `{{respell|DOYTCH}}`{=mediawiki}; *דוד דויטש*; born 18 May 1953) is a British physicist at the University of Oxford, often described as the \"father of quantum computing\". He is a visiting professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford. He pioneered the field of quantum computation by formulating a description for a quantum Turing machine, as well as specifying an algorithm designed to run on a quantum computer. He is a proponent of the many-worlds interpretation of quantum mechanics.
## Early life and education {#early_life_and_education}
Deutsch was born to a Jewish family in Haifa, Israel on 18 May 1953, the son of Oskar and Tikva Deutsch. In London, David attended Geneva House school in Cricklewood (his parents owned and ran the Alma restaurant on Cricklewood Broadway), followed by William Ellis School in Highgate before reading Natural Sciences at Clare College, Cambridge and taking Part III of the Mathematical Tripos. He went on to Wolfson College, Oxford for his doctorate in theoretical physics, about quantum field theory in curved space-time, supervised by Dennis Sciama and Philip Candelas.
## Career and research {#career_and_research}
His work on quantum algorithms began with a 1985 paper, later expanded in 1992 along with Richard Jozsa, to produce the Deutsch--Jozsa algorithm, one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. In his nomination for election as a Fellow of the Royal Society (FRS) in 2008, his contributions were described as:
Since 2012, he has been working on constructor theory, an attempt at generalizing the quantum theory of computation to cover not just computation but all physical processes. Together with Chiara Marletto, he published a paper in December 2014 entitled *Constructor theory of information*, that conjectures that information can be expressed solely in terms of which transformations of physical systems are possible and which are impossible.
### *The Fabric of Reality* {#the_fabric_of_reality}
In his 1997 book *The Fabric of Reality*, Deutsch details his \"Theory of Everything\". It aims not at the reduction of everything to particle physics, but rather mutual support among multiversal, computational, epistemological, and evolutionary principles. His theory of everything is somewhat emergentist rather than reductive. There are four strands to his theory:
1. Hugh Everett\'s many-worlds interpretation of quantum physics, \"the first and most important of the four strands.\"
2. Karl Popper\'s epistemology, especially its anti-inductivism and requiring a realist (non-instrumental) interpretation of scientific theories, as well as its emphasis on taking seriously those bold conjectures that resist falsification.
3. Alan Turing\'s theory of computation, especially as developed in Deutsch\'s Turing principle, in which the Universal Turing machine is replaced by Deutsch\'s universal quantum computer. (\"*The* theory of computation is now the quantum theory of computation.\")
4. Richard Dawkins\' refinement of Darwinian evolutionary theory and the modern evolutionary synthesis, especially the ideas of replicator and meme as they integrate with Popperian problem-solving (the epistemological strand).
### Invariants
In a 2009 TED talk, Deutsch expounded a criterion for scientific explanation, which is to formulate invariants: \"State an explanation \[publicly, so that it can be dated and verified by others later\] that remains invariant \[in the face of apparent change, new information, or unexpected conditions\]\".
: \"A bad explanation is easy to vary.\"
: \"The search for hard-to-vary explanations is the origin of all progress\"
: \"That `{{em|the truth consists of hard-to-vary assertions about reality}}`{=mediawiki} is the most important fact about the physical world.\"
Invariance as a fundamental aspect of a scientific account of reality has long been part of philosophy of science: for example, Friedel Weinert\'s book *The Scientist as Philosopher* (2004) noted the presence of the theme in many writings from around 1900 onward, such as works by Henri Poincaré (1902), Ernst Cassirer (1920), Max Born (1949 and 1953), Paul Dirac (1958), Olivier Costa de Beauregard (1966), Eugene Wigner (1967), Lawrence Sklar (1974), Michael Friedman (1983), John D. Norton (1992), Nicholas Maxwell (1993), Alan Cook (1994), Alistair Cameron Crombie (1994), Margaret Morrison (1995), Richard Feynman (1997), Robert Nozick (2001), and Tim Maudlin (2002).
### *The Beginning of Infinity* {#the_beginning_of_infinity}
Deutsch\'s second book, *The Beginning of Infinity: Explanations that Transform the World*, was published on 31 March 2011. In this book, he views the European Enlightenment of the 17th and 18th centuries as near the beginning of a potentially unending sequence of purposeful knowledge creation. He examines the nature of knowledge, memes, and how and why creativity evolved in humans.
### Awards and honours {#awards_and_honours}
*The Fabric of Reality* was shortlisted for the Rhône-Poulenc science book award in 1998. Deutsch was awarded the Dirac Prize of the Institute of Physics in 1998, and the Edge of Computation Science Prize in 2005. In 2017, he received the Dirac Medal of the International Centre for Theoretical Physics (ICTP). Deutsch is linked to Paul Dirac through his doctoral advisor Dennis Sciama, whose doctoral advisor was Dirac. Deutsch was elected a Fellow of the Royal Society (FRS) in 2008. In 2018, he received the Micius Quantum Prize. In 2021, he was awarded the Isaac Newton Medal and Prize. On 22 September 2022, he was awarded the Breakthrough Prize in Fundamental Physics, shared with Charles H. Bennett, Gilles Brassard and Peter Shor.
## Personal life {#personal_life}
Deutsch is a founding member of the parenting and educational method Taking Children Seriously.
### Views on Brexit {#views_on_brexit}
Deutsch supported Brexit; his advocacy was quoted by the then-government adviser Dominic Cummings and reported by *The New Yorker* magazine in January 2020.
Michael Gove mentioned Deutsch\'s viewpoint during a BBC Brexit debate. Regarding the debate, Deutsch later commented:
> \"In Britain there is a clear path if you have a grievance, you can join a pressure-group, the pressure-group will pressure the government, or you can see your MP, and the MP will see the grievance building up, and so-on. Whereas, Europe is structured in such a way that it\'s very difficult to know whom to address your grievance to, or what they could do about it.\"
Deutsch was not involved in any campaign advocacy for Brexit. His public remarks on the subject were quoted by Cummings and Gove on their own initiative, as Deutsch later made clear.
# Director's cut
In public use, a **director\'s cut** is the director\'s preferred version of a film (or video game, television episode, music video, commercial, *etc.*). It is generally considered a marketing term to represent the version of a film the director prefers, and is usually used in contrast to a theatrical release of that film where the director did not have final cut privilege and did not agree with what was released. The word \"cut\" is used in this context as a synecdoche to refer to the entire film editing process and the resulting product. Traditionally, films were edited by literally cutting strips of film and splicing them together.
Most of the time, film directors do not have the \"final cut\" (final say on the version released to the public). Those with money invested in the film, such as the production companies, distributors, or studios, may make changes intended to make the film more profitable at the box office. In extreme cases that can sometimes mean a different ending, less ambiguity, or excluding scenes that would earn a more audience-restricting rating, but more often means that the film is simply shortened to provide more screenings per day.
With the rise of home video, the phrase became more generically used as a marketing term to communicate to consumers that this is the director\'s preferred edit of a film, and it implies the director was not happy with the version that was originally released. Sometimes there are big disagreements between the director\'s vision and the producer\'s vision, and the director\'s preferred edit is sought after by fans (for example Terry Gilliam\'s *Brazil*).
Not all films have separate \"director\'s cuts\" (often the director is happy with the theatrical release, even if they didn\'t have final cut privilege), and sometimes separate versions of films are released as \"director\'s cuts\" even if the director doesn\'t prefer them. One such example is Ridley Scott\'s *Alien*, which had a \"director\'s cut\" released in 2003, even though the director said it was purely for \"marketing purposes\" and didn\'t represent his preferred vision for the film.
Sometimes alternate edits are released which are not the director\'s preferred cuts, but which showcase different visions of the project for fans to enjoy. Examples include James Cameron\'s *Avatar*, which was released in both \"Special Edition\" and \"Extended\" cuts, and Peter Jackson\'s *The Lord of the Rings* trilogy, which was released on home video in \"Extended Editions\". These versions do not represent the directors\' preferred visions.
The term has since expanded to include media such as video games, comic books and music albums (the latter two of which don\'t actually have directors).
## Original use of the phrase {#original_use_of_the_phrase}
Within the industry itself, a \"director\'s cut\" refers to a stage in the editing process and is not usually what a director wants to release to the public, because it is unfinished. The editing process of a film is broken into stages: first comes the assembly/rough cut, where all selected takes are put together in the order in which they should appear in the film. Next, the editor\'s cut is reduced from the rough cut; the editor may be guided by their own choices or by notes from the director or producers. Last comes the final cut, which actually gets released or broadcast. In between the editor\'s cut and the final cut can come any number of fine cuts, including the director\'s cut. The director\'s cut may include unsatisfactory takes, a preliminary soundtrack, or a lack of desired pick-up shots, which the director would not like to be shown but uses as placeholders until satisfactory replacements can be inserted. This is still how the term is used within the film industry, as well as in commercials, television, and music videos.
## Inception
The trend of releasing alternate cuts of films for artistic reasons became prominent in the 1970s; in 1974, the \"director\'s cut\" of *The Wild Bunch* was shown theatrically in Los Angeles to sold-out audiences. The theatrical release of the film had cut 10 minutes to get an R rating, but this cut was hailed as superior and has now become the definitive one. Other early examples include George Lucas\'s first two films being re-released following the success of *Star Wars*, in cuts which more closely resembled his vision, or Peter Bogdanovich re-cutting *The Last Picture Show* several times. Charlie Chaplin also re-released all of his films in the 1970s, several of which were re-cut (Chaplin\'s re-release of *The Gold Rush* in the 1940s is almost certainly the earliest prominent example of a director\'s re-cut film being released to the public). A theatrical re-release of *Close Encounters of the Third Kind* used the phrase \"Special Edition\" to describe a cut which was closer to Spielberg\'s intent but had a compromised ending demanded by the studio.
As the home video industry rose in the early 1980s, video releases of director\'s cuts were sometimes created for the small but dedicated cult fan market. Los Angeles cable station Z Channel is also cited as significant in the popularization of alternate cuts. Early examples of films released in this manner include Michael Cimino\'s *Heaven\'s Gate*, where a longer cut was recalled from theatres but subsequently shown on cable and eventually released to home video; James Cameron\'s *Aliens*, where a video release restored 20 minutes the studio had insisted on cutting; Cameron also voluntarily made cuts to the theatrical version of *The Abyss* for pacing but restored them for a video release, and most famously, Ridley Scott\'s *Blade Runner*, where an alternate workprint version was released to fan acclaim, ultimately resulting in the 1992 recut. Scott later recut the film once more, releasing a version dubbed \"The Final Cut\" in 2007. This was the final re-cut and the first in which Scott maintained creative control over the final product, leading to The Final Cut being considered the definitive version of the film.
## Criticism
Once distributors discovered that consumers would buy alternate versions of films, it became more common for films to have alternative versions released. The original public meaning, a director\'s preferred vision, was increasingly ignored, leading to so-called \"director\'s cuts\" of films where the director actually prefers the theatrically released version (or had final cut privilege in the first place). Such versions are often marketing ploys, assembled by simply restoring deleted scenes, sometimes adding as much as a half-hour to the length of the film without regard to pacing and storytelling.
As a result, the \"director\'s cut\" label is often considered a misnomer. Some directors deliberately avoid it for alternate versions of their films (e.g. Peter Jackson and James Cameron, who each use the phrases \"Special Edition\" or \"Extended Edition\" instead).
Sometimes the term is used as a marketing ploy. For example, Ridley Scott states on the director\'s commentary track of *Alien* that the original theatrical release was his \"director\'s cut\", and that the new version was released as a marketing ploy. Director Peter Bogdanovich, no stranger to director\'s cuts himself, cites *Red River* as an example where `{{Blockquote|MGM have a version of Howard Hawks's ''Red River'' that they're calling the Director's Cut and it is absolutely not the director's cut. It's a cut the director didn't want, an earlier cut that was junked. They assume because it was longer that it's a director's cut. Capra cut two reels off ''Lost Horizon'' because it didn't work and then someone tried to put it back. There are certainly mistakes and stupidities in reconstructing pictures.<ref>{{cite web |last1=Jones |first1=Ellen |title=Is a 'director's cut' ever a good idea? |website=[[TheGuardian.com]] |date=7 April 2011 |url=https://www.theguardian.com/film/2011/apr/07/rise-of-the-directors-cut}}</ref>}}`{=mediawiki}
Another way that released director\'s cuts can be compromised is when directors were never allowed to even shoot their vision, and thus when the film is re-cut, they must make do with the footage that exists. Examples of this include Terry Zwigoff\'s *Bad Santa*, Brian Helgeland\'s *Payback*, and most notably the Richard Donner re-cut of *Superman II*. Donner completed about 75 per cent of the shooting of the sequel during the shooting of the first one but was fired from the project. His director\'s cut of the film includes, among other things, screen test footage of stars Christopher Reeve and Margot Kidder, footage used in the first film, and entire scenes that were shot by replacement director Richard Lester which Donner dislikes but were required for story purposes.
On the other hand, some critics (such as Roger Ebert) have approved of the use of the label in unsuccessful films that had been tampered with by studio executives, such as Sergio Leone\'s original cut of *Once Upon a Time in America*, and the moderately successful theatrical version of *Daredevil*, which were altered by studio interference for their theatrical release. Other well-received director\'s cuts include Ridley Scott\'s *Kingdom of Heaven* (with *Empire* magazine stating: \"The added 45 minutes in the Director's Cut are like pieces missing from a beautiful but incomplete puzzle\"), or Sam Peckinpah\'s *Pat Garrett and Billy the Kid*, where the restored 115-minute cut is closer to the director\'s intent than the theatrical 105-minute cut (the actual director\'s cut was 122 minutes; it was never completed to Peckinpah\'s satisfaction, but was used as a guide for the restoration that was done after his death).
In some instances, such as Peter Weir\'s *Picnic at Hanging Rock*, Robert Wise\'s *Star Trek: The Motion Picture*, John Cassavetes\'s *The Killing of a Chinese Bookie*, Blake Edwards\'s *Darling Lili* and Francis Ford Coppola\'s *The Godfather Coda*, changes made to a director\'s cut resulted in a very similar runtime or a shorter, more compact cut. This generally happens when a distributor insists that a film be completed to meet a release date, but sometimes it is the result of removing scenes that the distributor insisted on inserting, as opposed to restoring scenes they insisted on cutting.
## Extended cuts and special editions {#extended_cuts_and_special_editions}
(See Changes in *Star Wars* re-releases and *E.T. the Extra-Terrestrial: The 20th Anniversary*)
Distinct from director\'s cuts are alternate cuts released as \"special editions\" or \"extended cuts\". These versions are often put together for home video for fans and should not be confused with director\'s cuts. For example, despite releasing extended versions of his *The Lord of the Rings* trilogy, Peter Jackson told IGN in 2019 that "the theatrical versions are the definitive versions, I regard the extended cuts as being a novelty for the fans that really want to see the extra material."
James Cameron has shared similar sentiments regarding the special editions of his films, \"What I put into theaters is the Director\'s Cut. Nothing was cut that I didn\'t want cut. All the extra scenes we\'ve added back in are just a bonus for the fans.\" Similar statements were made by Ridley Scott for the 2003 \'director\'s cut\' of *Alien*.
Such alternate versions sometimes include changes to the special effects in addition to different editing, such as George Lucas\'s *Star Wars* films, and Steven Spielberg\'s *E.T. the Extra-Terrestrial*.
Extended or special editions can also apply to films that have been extended for television, or cut to fit time slots and long advertisement breaks, against the explicit wishes of the director, such as the TV versions of *Dune* (1984), *The Warriors* (1979), *Superman* (1978) and the *Harry Potter* films.
### Examples of alternate cuts {#examples_of_alternate_cuts}
*The Lord of the Rings* film series directed by Peter Jackson saw an \"Extended Edition\" release for each of the three films *The Fellowship of the Ring* (2001), *The Two Towers* (2002), and *The Return of the King* (2003), featuring an additional 30 minutes, 47 minutes and 51 minutes respectively of new scenes, special effects and music, alongside fan-club credits. These versions of the films were not Jackson\'s preferred edits; they were simply extended versions for fans to enjoy at home.
*Batman v Superman: Dawn of Justice* directed by Zack Snyder had an \"Ultimate Edition,\" which added back 31 minutes of footage cut for the theatrical release and received an R rating, released digitally on 28 June 2016, and on Blu-ray on 19 July 2016.
The film *Justice League*, which suffered a very troubled production, was begun by Snyder, who completed a pre-postproduction director\'s cut but had to step down before completing the project due to his daughter\'s death. Joss Whedon was hired by the film\'s distributor Warner Bros. Pictures to complete the film, which was heavily re-shot and re-edited and released in 2017 with Snyder retaining the directorial credit; it was received negatively by general audiences, fans and critics alike and was a box office failure. Following a global fan campaign, which the director and members of the cast and crew supported, Snyder was allowed to return and complete the project the way he intended it, and a 4-hour version of the film dubbed *Zack Snyder\'s Justice League*, with some additionally shot scenes at the end, was released on March 18, 2021, on HBO Max to more favorable reviews than the original version. Snyder originally teased a 214-minute cut of the film that would have been the theatrical version released in 2017 had he not stepped down from the project. Snyder has also confirmed that his Netflix-distributed sci-fi film *Rebel Moon -- Part One: A Child of Fire* (2023) and its sequel *Rebel Moon -- Part Two: The Scargiver* (2024) would receive R-rated director\'s cuts under the new titles *Rebel Moon -- Chapter One: Chalice of Blood* and *Rebel Moon -- Chapter Two: Curse of Forgiveness* (both 2024); the PG-13 initial versions of those films were critically panned.
The film *Caligula* exists in at least 10 different officially released versions, ranging from a sub-90-minute TV-14 (later TV-MA) edit for cable television to an unrated, fully pornographic version exceeding 3.5 hours; this is believed to be the largest number of distinct versions of a single film. Among major studio films, the record is believed to be held by *Blade Runner*: the magazine *Video Watchdog* counted no fewer than seven distinct versions in a 1993 issue, before director Ridley Scott released \"The Final Cut\" in 2007 to acclaim from critics, including Roger Ebert, who included it on his great-movies list. The release of *Blade Runner: The Final Cut* brings the supposed grand total to eight differing versions of *Blade Runner*.
Upon its release on DVD and Blu-ray in 2019, *Fantastic Beasts: The Crimes of Grindelwald* featured an extended cut with seven minutes of additional footage. This is the first time since *Harry Potter and the Chamber of Secrets* that a Wizarding World film has had one.
An animated example of an extended cut without the approval of the director was 1983\'s *Twice Upon a Time*, which was extended to have more profanity (supervised by co-writer and producer Bill Couturié) as opposed to co-director John Korty\'s original.
The Coen Brothers\' *Blood Simple* is one of few examples that demonstrate director\'s cuts are not necessarily longer.
## Music videos {#music_videos}
The music video for the 2006 Academy Award-nominated song \"Listen\", performed by Beyoncé, received a director\'s cut by Diane Martel. This version of the video was later included on Knowles\' *B\'Day Anthology Video Album* (2007). Linkin Park has a director\'s cut version of their music video \"Faint\" (directed by Mark Romanek), in which one of the band members spray-paints the words \"En Proceso\" on a wall, and Hoobastank have one for 2004\'s \"The Reason\" which omits the woman getting hit by the car. Britney Spears\' music video for 2007\'s \"Gimme More\" was first released as a director\'s cut on iTunes, with the official video released three days later. Many other director\'s cut music videos contain sexual content that cannot be shown on TV, resulting in alternative scenes, such as Thirty Seconds to Mars\'s \"Hurricane\", and in some cases alternative videos, as with Spears\' 2008 video for \"Womanizer\".
## Expanded usage in pop culture {#expanded_usage_in_pop_culture}
As the trend became more widely recognized, the term *director\'s cut* came to be used colloquially for expanded versions of other things, including video games, music, and comic books. This looser usage further diluted the term\'s original meaning, and it is now rarely applied in those contexts.
### Video games {#video_games}
For video games, these expanded versions, also referred to as \"complete editions\", will have additions to the gameplay or additional game modes and features outside the main portion of the game.
As is the case with certain high-profile Japanese-produced games, the game designers may take the liberty of revising their product for the overseas market with additional features during the localization process. These features are later added back to the native market in a re-release of a game in what is often referred to as the international version of the game. This was the case with the overseas versions of *Final Fantasy VII*, *Metal Gear Solid* and *Rogue Galaxy*, which contained additional features (such as new difficulty settings for *Metal Gear Solid*), resulting in re-released versions of those respective games in Japan (*Final Fantasy VII International*, *Metal Gear Solid: Integral* and *Rogue Galaxy: Director\'s Cut*). In the case of *Metal Gear Solid 2: Sons of Liberty* and *Metal Gear Solid 3: Snake Eater*, the American versions were released first, followed by the Japanese versions and then the European versions, with each regional release offering new content not found in the previous one. All of the added content from the Japanese and European versions of those games was included in the expanded editions titled *Metal Gear Solid 2: Substance* and *Metal Gear Solid 3: Subsistence*.
Like films, games also occasionally include extra, uncensored, or alternate versions of cutscenes, as was the case with *Resident Evil: Code Veronica X*. In markets with strict censorship, a later relaxing of those laws occasionally results in the game being re-released with a \"Special/Uncut Edition\" tag added to differentiate the current uncensored edition from the originally released censored version.
Several of the *Pokémon* games have also received director\'s cuts and have used the term \"extension\", though \"remake\" and \"third version\" are also often used by many fans. These include *Pocket Monsters: Blue* (Japan only), *Pokémon Yellow* (for *Pokémon Red* and *Green*/*Blue*), *Pokémon Crystal* (for *Pokémon Gold* and *Silver*), *Pokémon Emerald* (for *Pokémon Ruby* and *Sapphire*), *Pokémon Platinum* (for *Pokémon Diamond* and *Pearl*) and *Pokémon Ultra Sun* and *Ultra Moon*.
The PlayStation 5 \"Director\'s Cut\" releases of the PlayStation 4 games *Ghost of Tsushima* and *Death Stranding* both added expanded features to their respective games.
### Music
\"Director\'s cuts\" in music are rarely released. A few exceptions include Guided by Voices\' 1994 album *Bee Thousand*, which was re-released as a three disc vinyl LP director\'s cut in 2004, and Fall Out Boy\'s 2003 album *Take This to Your Grave*, which was re-released as a Director\'s cut in 2005 with two extra tracks.
In 2011, British singer Kate Bush released the album titled *Director\'s Cut*. It is made up of songs from her earlier albums *The Sensual World* and *The Red Shoes*, which have been remixed and restructured, three of which were re-recorded completely.
# Djbdns
The **djbdns** software package is a DNS implementation. It was created by Daniel J. Bernstein in response to his frustrations with repeated security holes in the widely used BIND DNS software. As a challenge, Bernstein offered a \$1000 prize for the first person to find a security hole in djbdns, which was awarded in March 2009 to Matthew Dempsky.
djbdns\'s tinydns component was at one point the second most popular DNS server in terms of the number of domains for which it was the authoritative server, and the third most popular in terms of the number of DNS hosts running it.
djbdns has never been vulnerable to the widespread cache poisoning vulnerability reported in July 2008, but it has been discovered that it is vulnerable to a related attack.
The source code has not been centrally managed since its release in 2001, and was released into the public domain in 2007. As of March 2009, there are a number of forks, one of which is dbndns (part of the Debian Project), and more than a dozen patches to modify the released version.
While djbdns does not directly support DNSSEC, there are third party patches to add DNSSEC support to djbdns\' authoritative-only tinydns component.
## Components
The djbdns software consists of servers, clients, and miscellaneous configuration tools.
### Servers
- dnscache --- the DNS resolver and cache.
- tinydns --- a database-driven DNS server (a sample data file appears after this list).
- walldns --- a \"reverse DNS wall\", providing IP address-to-domain name lookup only.
- rbldns --- a server designed for DNS blacklisting service.
- pickdns --- a database-driven server that chooses from matching records depending on the requestor\'s location. (This feature is now a standard part of tinydns.)
- axfrdns --- a zone transfer server.
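As a sketch of tinydns\'s database-driven approach (the hostnames and addresses below are invented for illustration): records are declared one per line in a plain-text `data` file, where the leading character selects the record type, and the file is compiled into a constant database `data.cdb` by the companion `tinydns-data` tool.

``` text
# ".": NS + SOA records for the zone, naming a.ns.example.com as its server
.example.com:192.0.2.1:a:86400
# "=": A record plus the matching reverse PTR record
=www.example.com:192.0.2.10:86400
# "+": A record only
+ftp.example.com:192.0.2.11:86400
# "@": MX record with distance 10, mail handled by a.mx.example.com
@example.com:192.0.2.20:a:10:86400
# "C": CNAME alias
Cblog.example.com:www.example.com:3600
```

Because tinydns answers queries from the compiled constant database, record changes take effect when `data.cdb` is atomically replaced by a rebuild.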
### Client tools {#client_tools}
- axfr-get --- a zone-transfer client.
- dnsip --- simple address from name lookup (see the sample session after this list).
- dnsipq --- address from name lookup with rewriting rules.
- dnsname --- simple name from address lookup.
- dnstxt --- simple text record from name lookup.
- dnsmx --- mail exchanger lookup.
- dnsfilter --- looks up names for addresses read from stdin, in parallel.
- dnsqr --- recursive general record lookup.
- dnsq --- non-recursive general record lookup, useful for debugging.
- dnstrace (and dnstracesort) --- comprehensive testing of the chains of authority over DNS servers and their names.
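A hypothetical session with the simpler lookup clients might look like this (the names and addresses are again invented):

``` text
$ dnsip www.example.com
192.0.2.10
$ dnsname 192.0.2.10
www.example.com
$ dnsmx example.com
10 a.mx.example.com
```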
## Design
In djbdns, different features and services are split off into separate programs. For example, zone transfers, zone file parsing, caching, and recursive resolving are implemented as separate programs. The result of these design decisions is a reduction in code size and complexity of the daemon program that provides the core function of answering lookup requests. Bernstein asserts that this is true to the spirit of the Unix operating system, and makes security verification much simpler.
## Copyright status {#copyright_status}
On December 28, 2007, Bernstein released djbdns into the public domain. Previously, the package was distributed free of charge as license-free software. However, this did not permit the distribution of modified versions of djbdns, the freedom to do so being one of the core principles of open-source software. Consequently, it was not included in those Linux distributions which required all components to be open-source.
# Dublin Core
The **Dublin Core vocabulary**, also known as the **Dublin Core Metadata Terms** (**DCMT**), is a general purpose metadata vocabulary for describing resources of any type. It was first developed for describing web content in the early days of the World Wide Web. The **Dublin Core Metadata Initiative** (**DCMI**) is responsible for maintaining the Dublin Core vocabulary.
Initially developed as fifteen terms in 1998, the set of elements has grown over time, and in 2008 was redefined as a Resource Description Framework (RDF) vocabulary.
Designed with minimal constraints, each Dublin Core element is optional and may be repeated. There is no prescribed order in Dublin Core for presenting or using the elements.
## Milestones
- 1995 - An invitational meeting hosted by the OCLC Online Computer Library Center and the National Center for Supercomputing Applications (NCSA) takes place in Dublin, Ohio, the headquarters of OCLC.
- 1998, September - RFC 2413 \"Dublin Core Metadata for Resource Discovery\" details the original 15-element vocabulary.
- 2000 - Issuance of Qualified Dublin Core.
- 2001 - Publication of the Dublin Core Metadata Element Set as ANSI/NISO Z39.85.
- 2008 - Publication of Dublin Core Metadata Initiative Terms in RDF.
## Evolution of the Dublin Core vocabulary {#evolution_of_the_dublin_core_vocabulary}
The Dublin Core Element Set was a response to concern about the accurate discovery of resources on the Web, with some early assumptions that this would be a library function. In particular it anticipated a future in which scholarly materials would be searchable on the World Wide Web. Whereas HTML was being used to mark up the structure of documents, metadata was needed to mark up their contents. Given the great number of documents on the World Wide Web and those soon to be added to it, it was proposed that \"self-identifying\" documents would be necessary.
To this end, the Dublin Core Metadata Workshop met beginning in 1995 to develop a vocabulary that could be used to insert consistent metadata into Web documents. Originally defined as 15 metadata elements, the Dublin Core Element Set allowed authors of web pages a vocabulary and method for creating simple metadata for their works. It provided a simple, flat element set that could be used to describe a wide range of resources.
Qualified Dublin Core was developed in the late 1990s to provide an extension mechanism to the vocabulary of 15 elements. This was a response to communities whose metadata needs required additional detail.
In 2012, the *DCMI Metadata Terms* was created using an RDF data model. This expanded element set incorporates the original 15 elements and many of the qualifiers of Qualified Dublin Core as RDF properties. The full set of elements is found under the namespace `http://purl.org/dc/terms/`. There is a separate namespace for the original 15 elements as previously defined: `http://purl.org/dc/elements/1.1/`.
### Dublin Core Metadata Element Set, 1995 {#dublin_core_metadata_element_set_1995}
The Dublin Core vocabulary published in 1999 consisted of 15 terms: Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights.
The vocabulary was commonly expressed in HTML \"meta\" tags in the \"head\" section of an HTML-encoded page.
The vocabulary could be used in any metadata serialization including key/value pairs and XML.
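For the HTML case, a page head might carry Dublin Core metadata in the style later codified by RFC 2731 (the title, author, and date here are invented):

``` html
<head>
  <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">
  <meta name="DC.title" content="A Sample Resource">
  <meta name="DC.creator" content="Jane Doe">
  <meta name="DC.date" content="2000-10-09">
  <meta name="DC.format" content="text/html">
</head>
```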
### Qualified Dublin Core, 2000 {#qualified_dublin_core_2000}
Subsequent to the specification of the original 15 elements, Qualified Dublin Core was developed to provide an extension mechanism to be used when the primary 15 terms were not sufficient. A set of common refinements and encoding schemes was provided in the documentation; these schemes include controlled vocabularies and formal notations or parsing rules. Qualified Dublin Core was not limited to these specific refinements, allowing communities to create extended metadata terms to meet their needs.
The guiding principle for the qualification of Dublin Core elements, colloquially known as the *Dumb-Down Principle*, states that an application that does not understand a specific element refinement term should be able to ignore the qualifier and treat the metadata value as if it were an unqualified (broader) element. While this may result in some loss of specificity, the remaining element value (without the qualifier) should continue to be generally correct and useful for discovery.
Qualified Dublin Core added qualifiers to these elements:
  Element       Qualifier
  ------------- -------------------
  Title         Alternative
  Description   Table Of Contents
  \"            Abstract
  Date          Created
  \"            Valid
  \"            Available
  \"            Issued
  \"            Modified
  Format        Extent
  \"            Medium
  Relation      Is Version Of
  \"            Has Version
  \"            Is Replaced By
  \"            Replaces
  \"            Is Required By
  \"            Requires
  \"            Is Part Of
  \"            Has Part
  \"            Is Referenced By
  \"            References
  \"            Is Format Of
  \"            Has Format
  Coverage      Spatial
  \"            Temporal

  : Qualified Dublin Core Elements
And added three elements not in the base 15:
- Audience
- Provenance
- RightsHolder
Qualified Dublin Core is often used with a \"dot syntax\", with a period separating the element and the qualifier(s). This is shown in this excerpted example provided by Chan and Hodges:
> **Title:** D-Lib Magazine\
> **Title.alternative:** Digital Library Magazine\
> **Identifier.ISSN:** 1082-9873\
> **Publisher:** Corporation for National Research Initiatives\
> **Publisher.place:** Reston, VA.\
> **Subject.topical.LCSH:** Digital libraries - Periodicals
### DCMI Metadata Terms, 2008 {#dcmi_metadata_terms_2008}
The DCMI Metadata Terms lists the current set of the Dublin Core vocabulary. This set includes the fifteen terms of the DCMES, as well as many of the qualified terms. Each term has a unique URI in the namespace `http://purl.org/dc/terms/`, and all are defined as RDF properties.
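As a sketch of the terms in use as RDF properties (the resource URIs are invented), a description in Turtle notation might read:

``` text
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://example.org/docs/42>
    dcterms:title    "A Sample Resource" ;
    dcterms:creator  <http://example.org/people/jdoe> ;
    dcterms:modified "2008-01-14" .
```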
The set also includes a number of RDF classes that are used as domains and ranges of some of the properties.
## Maintenance of the standard {#maintenance_of_the_standard}
Changes that are made to the Dublin Core standard are reviewed by a DCMI Usage Board within the context of a DCMI Namespace Policy. This policy describes how terms are assigned and also sets limits on the amount of editorial changes allowed to the labels, definitions, and usage comments.
## Dublin Core as standards {#dublin_core_as_standards}
The Dublin Core Metadata Terms vocabulary has been formally standardized internationally as **ISO 15836** by the International Organization for Standardization (ISO) and as **IETF RFC 5013** by the Internet Engineering Task Force (IETF), as well as in the U.S. as **ANSI/NISO Z39.85** by the National Information Standards Organization (NISO).
## Syntax
Syntax choices for metadata expressed with the Dublin Core elements depend on context. Dublin Core concepts and semantics are designed to be syntax independent`{{clarify|date=May 2019}}`{=mediawiki} and apply to a variety of contexts, as long as the metadata is in a form suitable for interpretation by both machines and people.
## Notable applications {#notable_applications}
One Document Type Definition based on Dublin Core is the Open Source Metadata Framework (OMF) specification. OMF is in turn used by Rarian (superseding ScrollKeeper), which is used by the GNOME desktop and KDE help browsers and the ScrollServer documentation server.
PBCore is also based on Dublin Core. The Zope CMF\'s Metadata products, used by the Plone, ERP5, the Nuxeo CPS Content management systems, SimpleDL, and Fedora Commons also implement Dublin Core. The EPUB e-book format uses Dublin Core metadata in the OPF file. Qualified Dublin Core is used in the DSpace archival management software.
The Australian Government Locator Service (AGLS) metadata standard is an application profile of Dublin Core.
# Document Object Model
The **Document Object Model** (**DOM**) is a cross-platform and language-independent API that treats an HTML or XML document as a tree structure wherein each node is an object representing a part of the document. The DOM represents a document with a logical tree. Each branch of the tree ends in a node, and each node contains objects. DOM methods allow programmatic access to the tree; with them one can change the structure, style or content of a document. Nodes can have event handlers (also known as event listeners) attached to them. Once an event is triggered, the event handlers get executed.
The principal standardization of the DOM was handled by the World Wide Web Consortium (W3C), which last developed a recommendation in 2004. WHATWG took over the development of the standard, publishing it as a living document. The W3C now publishes stable snapshots of the WHATWG standard.
In HTML DOM (Document Object Model), every element is a node:
- A document is a document node.
- All HTML elements are element nodes.
- All HTML attributes are attribute nodes.
- Text inside HTML elements is represented by text nodes.
- Comments are comment nodes.
## History
The history of the Document Object Model is intertwined with the history of the \"browser wars\" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the JavaScript engines of web browsers.
JavaScript was released by Netscape Communications in 1995 within Netscape Navigator 2.0. Netscape\'s competitor, Microsoft, released Internet Explorer 3.0 the following year with a reimplementation of JavaScript called JScript. JavaScript and JScript let web developers create web pages with client-side interactivity. The limited facilities for detecting user-generated events and modifying the HTML document in the first generation of these languages eventually became known as \"DOM Level 0\" or \"Legacy DOM.\" No independent standard was developed for DOM Level 0, but it was partly described in the specifications for HTML 4.
Legacy DOM was limited in the kinds of elements that could be accessed. Form, link and image elements could be referenced with a hierarchical name that began with the root document object. A hierarchical name could make use of either the names or the sequential index of the traversed elements. For example, a form input element could be accessed as either `document.myForm.myInput` or `document.forms[0].elements[0]`.
The Legacy DOM enabled client-side form validation and simple interface interactivity like creating tooltips.
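A minimal sketch of such \"DOM Level 0\" validation (the form and field names here are invented) might look like this:

``` javascript
// Cancels form submission when the field is empty; attached in the
// markup as <form name="myForm" onsubmit="return validateForm()">.
function validateForm() {
  // Equivalently: document.forms[0].elements[0]
  var field = document.myForm.myInput;
  if (field.value === "") {
    alert("Please fill in the field.");
    return false; // returning false cancels the submission
  }
  return true;
}
```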
In 1997, Netscape and Microsoft released version 4.0 of Netscape Navigator and Internet Explorer respectively, adding support for Dynamic HTML (DHTML) functionality enabling changes to a loaded HTML document. DHTML required extensions to the rudimentary document object that was available in the Legacy DOM implementations. Although the Legacy DOM implementations were largely compatible since JScript was based on JavaScript, the DHTML DOM extensions were developed in parallel by each browser maker and remained incompatible. These versions of the DOM became known as the \"Intermediate DOM\".
After the standardization of ECMAScript, the W3C DOM Working Group began drafting a standard DOM specification. The completed specification, known as \"DOM Level 1\", became a W3C Recommendation in late 1998. By 2005, large parts of W3C DOM were well-supported by common ECMAScript-enabled browsers, including Internet Explorer 6 (from 2001), Opera, Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino).
## Standards
The W3C DOM Working Group published its final recommendation and subsequently disbanded in 2004. Development efforts migrated to the WHATWG, which continues to maintain a living standard. In 2009, the Web Applications group reorganized DOM activities at the W3C. In 2013, due to a lack of progress and the impending release of HTML5, the DOM Level 4 specification was reassigned to the HTML Working Group to expedite its completion. Meanwhile, in 2015, the Web Applications group was disbanded and DOM stewardship passed to the Web Platform group. Beginning with the publication of DOM Level 4 in 2015, the W3C creates new recommendations based on snapshots of the WHATWG standard.
- DOM Level 1 provided a complete model for an entire HTML or XML document, including the means to change any portion of the document.
- DOM Level 2 was published in late 2000. It introduced the `getElementById` function as well as an event model and support for XML namespaces and CSS.
- DOM Level 3, published in April 2004, added support for XPath and keyboard event handling, as well as an interface for serializing documents as XML.
- HTML5 was published in October 2014. Parts of HTML5 replaced the DOM Level 2 HTML module.
- DOM Level 4 was published in 2015 and retired in November 2020.
- [DOM 2020-06](https://dom.spec.whatwg.org/review-drafts/2020-06/) was published in September 2021 as a W3C Recommendation. It is a snapshot of the WHATWG living standard.
## Applications
### Web browsers {#web_browsers}
To render a document such as an HTML page, most web browsers use an internal model similar to the DOM. The nodes of every document are organized in a tree structure, called the *DOM tree*, with the topmost node named the \"Document object\". When an HTML page is rendered in a browser, the browser downloads the HTML into local memory and automatically parses it to display the page on screen. However, the DOM does not necessarily need to be represented as a tree, and some browsers have used other internal models.
### JavaScript
When a web page is loaded, the browser creates a Document Object Model of the page, which is an object-oriented representation of an HTML document that acts as an interface between JavaScript and the document itself. This allows the creation of dynamic web pages, because within a page JavaScript can do the following (a short sketch follows the list):
- add, change, and remove any of the HTML elements and attributes
- change any of the CSS styles
- react to all the existing events
- create new events
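A short sketch of those four capabilities (the element id is invented):

``` javascript
var heading = document.getElementById("title"); // assumes <h1 id="title"> exists
heading.textContent = "A new heading";          // change an HTML element
heading.style.color = "navy";                   // change a CSS style
heading.addEventListener("click", function () { // react to an existing event
  heading.classList.toggle("highlight");
});
heading.dispatchEvent(new Event("click"));      // create and fire a new event
```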
## DOM tree structure {#dom_tree_structure}
A Document Object Model (DOM) tree is a hierarchical representation of an HTML or XML document. It consists of a root node, which is the document itself, and a series of child nodes that represent the elements, attributes, and text content of the document. Each node in the tree has a parent node, except for the root node, and can have multiple child nodes.
### Elements as nodes {#elements_as_nodes}
Elements in an HTML or XML document are represented as nodes in the DOM tree. Each element node has a tag name and attributes, and can contain other element nodes or text nodes as children. For example, an HTML document with the following structure:
``` html
<html>
<head>
<title>My Website</title>
</head>
<body>
<h1>Welcome to DOM</h1>
<p>This is my website.</p>
</body>
</html>
```
will be represented in the DOM tree as:
``` text
- Document (root)
- html
- head
- title
- "My Website"
- body
- h1
- "Welcome to DOM"
- p
- "This is my website."
```
### Text nodes {#text_nodes}
Text content within an element is represented as a text node in the DOM tree. Text nodes do not have attributes or child nodes, and are always leaf nodes in the tree. For example, the text content \"My Website\" in the title element and \"Welcome to DOM\" in the h1 element in the above example are both represented as text nodes.
### Attributes as properties {#attributes_as_properties}
Attributes of an element are represented as properties of the element node in the DOM tree. For example, an element with the following HTML:
``` html
<a href="https://example.com">Link</a>
```
will be represented in the DOM tree as:
``` text
- a
- href: "https://example.com"
- "Link"
```
## Manipulating the DOM tree {#manipulating_the_dom_tree}
The DOM tree can be manipulated using JavaScript or other programming languages. Common tasks include navigating the tree, adding, removing, and modifying nodes, and getting and setting the properties of nodes. The DOM API provides a set of methods and properties to perform these operations, such as `getElementById`, `createElement`, `appendChild`, and `innerHTML`.
``` javascript
// Create the root element
var root = document.createElement("root");
// Create a child element
var child = document.createElement("child");
// Add the child element to the root element
root.appendChild(child);
```
Another way to create a DOM structure is using the innerHTML property to insert HTML code as a string, creating the elements and children in the process. For example:
``` javascript
document.getElementById("root").innerHTML = "<child></child>";
```
Another method is to use a JavaScript library or framework such as jQuery, AngularJS, React, Vue.js, etc. These libraries provide a more convenient, eloquent and efficient way to create, manipulate and interact with the DOM.
It is also possible to create a DOM structure from XML or JSON data, using JavaScript methods to parse the data and create the nodes accordingly, as in the sketch below.
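A sketch of both routes (the markup and data here are invented):

``` javascript
// From XML: DOMParser builds a separate document whose nodes
// can be imported into the page's own document.
var xmlDoc = new DOMParser().parseFromString("<note>Hello</note>", "text/xml");
document.body.appendChild(document.importNode(xmlDoc.documentElement, true));

// From JSON: parse the data, then create matching nodes.
var spec = JSON.parse('{"tag": "p", "text": "This is my website."}');
var el = document.createElement(spec.tag);
el.appendChild(document.createTextNode(spec.text));
document.body.appendChild(el); // rendered once attached to the page
```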
Creating a DOM structure does not necessarily mean that it will be displayed in the web page; it exists only in memory and must be appended to the document body or to a specific container to be rendered.
In summary, creating a DOM structure involves creating individual nodes and organizing them in a hierarchical structure using JavaScript or other programming languages, and it can be done using several methods depending on the use case and the developer\'s preference.
## Implementations
Because the DOM supports navigation in any direction (e.g., parent and previous sibling) and allows for arbitrary modifications, implementations typically buffer the document. However, a DOM need not originate in a serialized document at all: it can be created in place with the DOM API. Even before the idea of the DOM originated, there were implementations of equivalent structure with persistent disk representation and rapid access, for example DynaText\'s model and various database approaches.
### Layout engines {#layout_engines}
Web browsers rely on layout engines to parse HTML into a DOM. Some layout engines, such as Trident/MSHTML, are associated primarily or exclusively with a particular browser, such as Internet Explorer. Others, including Blink, WebKit, and Gecko, are shared by a number of browsers, such as Google Chrome, Opera, Safari, and Firefox. The different layout engines implement the DOM standards to varying degrees of compliance.
### Libraries
DOM implementations:
- libxml2
- MSXML
- Xerces is a collection of DOM implementations written in C++, Java and Perl
- [xml.dom](https://docs.python.org/3/library/xml.dom.html) for Python
- XML for \<SCRIPT\> is a JavaScript-based DOM implementation
- [PHP.Gt DOM](https://github.com/PhpGt/Dom) is a server-side DOM implementation based on libxml2 and brings DOM level 4 compatibility to the PHP programming language
- [Domino](https://github.com/fgnass/domino/) is a Server-side (Node.js) DOM implementation based on Mozilla\'s dom.js. Domino is used in the MediaWiki stack with Visual Editor.
- [SimpleHtmlDom](https://github.com/wooly905/SimpleHtmlDom/) is a simple HTML document object model in C#, which can generate HTML string programmatically
# Design pattern
A **design pattern** is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, particularly software engineering.
## Details
An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with.
Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from \"conflicting forces\"---such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern\'s author to determine which solution is \"best\", must also be documented within the pattern.
Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn\'t help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be \"all houses\", \"all two-story houses\", or \"all places where people spend time\".
For instance, in Christopher Alexander\'s work, bus stops and waiting rooms in a surgery center are both within the context for the pattern \"A PLACE TO WAIT\".
## Examples
- Software design pattern, in software design
- Architectural pattern, for software architecture
- Interaction design pattern, used in interaction design / human--computer interaction
- Pedagogical patterns, in teaching
- Pattern gardening, in gardening
Business models also have design patterns. See `{{slink|Business model#Examples}}`{=mediawiki}
# Da capo
**Da capo** (`{{IPAc-en|d|ɑː|_|ˈ|k|ɑː|p|oʊ}}`{=mediawiki} `{{respell|dah|_|KAH|poh}}`{=mediawiki}, `{{IPAc-en|USalso|d|ə|_|-}}`{=mediawiki} `{{respell|də|_-}}`{=mediawiki}, `{{IPA|it|da (k)ˈkaːpo|lang}}`{=mediawiki}; often abbreviated as **D.C.**) is an Italian musical term that means \"from the beginning\" (literally, \"from the head\"). It directs the performer to repeat the preceding music from the beginning, and is commonly used because it saves space in the score.
In small pieces, this might be the same thing as a repeat. But in larger works, D.C. might occur after one or more repeats of small sections, indicating a return to the very beginning. The resulting structure of the piece is generally in ternary form. Sometimes, the composer describes the part to be repeated, for example: *Menuet da capo*.`{{Explanation needed|date=January 2018}}`{=mediawiki} In opera, where an aria of this structure is called a *da capo aria*, the repeated section is often adorned with grace notes.
The word *Fine* (Ital. \'end\') is generally placed above the stave at the point where the movement ceases after a \'Da capo\' repetition. Its place is occasionally taken by a pause (see fermata).
## Variations
- **Da Capo al Fine**`{{anchor|Da Capo al Fine}}`{=mediawiki} (often abbreviated as **D.C. al Fine**): Repeat from beginning to the end, or up to the word *Fine* (should that appear at the end of the passage)---the word *Fine* itself signifying the end.
- **Da Capo al Coda**`{{anchor|Da Capo al Coda}}`{=mediawiki} (often abbreviated as **D.C. al Coda**): Repeat from beginning to an indicated place and then play the tail part (the \"Coda\"). It directs the musician to go back and repeat the music from the beginning (\"Capo\"), and to continue playing until one reaches the first coda symbol. Upon reaching the first coda symbol, skip to the second coda symbol and continue playing until the end. The portion of the piece from the second coda to the end is often referred to as the \"coda\" of the piece, or quite literally as \"the tail\". This may also be instructed by simply using the words *al Coda* after which the musician is to skip to the written word *Coda*.
- **Da Capo al Segno**`{{anchor|Da Capo al Segno}}`{=mediawiki} (often abbreviated as **D.C. al Segno**): It means \"from the beginning to the sign (𝄋)\".
# Daniel Dennett
**Daniel Clement Dennett III** (March 28, 1942 -- April 19, 2024) was an American philosopher and cognitive scientist. His research centered on the philosophy of mind, the philosophy of science, and the philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science.
Dennett was the co-director of the Center for Cognitive Studies and the Austin B. Fletcher Professor of Philosophy at Tufts University in Massachusetts. Dennett was a member of the editorial board for *The Rutherford Journal* and a co-founder of The Clergy Project.
A vocal atheist and secularist, Dennett has been described as \"one of the most widely read and debated American philosophers\". He was referred to as one of the \"Four Horsemen\" of New Atheism, along with Richard Dawkins, Sam Harris, and Christopher Hitchens.
## Early life and education {#early_life_and_education}
Daniel Clement Dennett III was born on March 28, 1942, in Boston, Massachusetts, the son of Ruth Marjorie (née Leck; 1903--1971) and Daniel Clement Dennett Jr. (1910--1947).
Dennett spent part of his childhood in Lebanon, where, during World War II, his father, who had a PhD in Islamic studies from Harvard University, was a covert counter-intelligence agent with the Office of Strategic Services posing as a cultural attaché to the American Embassy in Beirut. His mother, an English major at Carleton College, went for a master\'s degree at the University of Minnesota before becoming an English teacher at the American Community School in Beirut. In 1947, his father was killed in a plane crash in Ethiopia. Shortly after, his mother took him back to Massachusetts. Dennett\'s sister is the investigative journalist Charlotte Dennett.
Dennett said that he was first introduced to the notion of philosophy while attending Camp Mowglis in Hebron, New Hampshire, at age 11, when a camp counselor said to him, \"You know what you are, Daniel? You\'re a philosopher.\"
Dennett graduated from Phillips Exeter Academy in 1959, and spent one year at Wesleyan University before receiving his BA degree in philosophy at Harvard University in 1963. There, he was a student of Willard Van Orman Quine. He had decided to transfer to Harvard after reading Quine\'s *From a Logical Point of View* and, thinking that Quine was wrong about some things, decided, as he said \"as only a freshman could, that I had to go to Harvard and confront this man with my corrections to his errors!\"
## Academic career {#academic_career}
In 1965, Dennett received his DPhil in philosophy at the University of Oxford, where he studied under Gilbert Ryle and was a member of Hertford College. His doctoral dissertation was entitled *The Mind and the Brain: Introspective Description in the Light of Neurological Findings; Intentionality*.
From 1965 to 1971, Dennett taught at the University of California, Irvine, before moving to Tufts University where he taught for many decades. He also spent periods visiting at Harvard University and several other universities. Dennett described himself as \"an autodidact---or, more properly, the beneficiary of hundreds of hours of informal tutorials on all the fields that interest me, from some of the world\'s leading scientists\".
Throughout his career, he was an interdisciplinarian who argued for \"breaking the silos of knowledge\", and he collaborated widely with computer scientists, cognitive scientists, and biologists.
Dennett was the recipient of a Fulbright Fellowship and two Guggenheim Fellowships.
## Philosophical views {#philosophical_views}
### Free will vs Determinism {#free_will_vs_determinism}
While he was a confirmed compatibilist on free will, in \"On Giving Libertarians What They Say They Want\"---chapter 15 of his 1978 book *Brainstorms*---Dennett articulated the case for a two-stage model of decision making in contrast to libertarian views.
While other philosophers have developed two-stage models, including William James, Henri Poincaré, Arthur Compton, and Henry Margenau, Dennett defended this model for the following reasons:
Leading libertarian philosophers such as Robert Kane have rejected Dennett\'s model, specifically that random chance is directly involved in a decision, on the basis that they believe this eliminates the agent\'s motives and reasons, character and values, and feelings and desires. They claim that, if chance is the primary cause of decisions, then agents cannot be liable for resultant actions. Kane says:
### Mind
Dennett is a proponent of materialism in the philosophy of mind. He argues that mental states, including consciousness, are entirely the result of physical processes in the brain. In his book *Consciousness Explained* (1991), Dennett presents his arguments for a materialist understanding of consciousness, rejecting Cartesian dualism in favor of a physicalist perspective.
Dennett remarked in several places (such as \"Self-portrait\", in *Brainchildren*) that his overall philosophical project remained largely the same from his time at Oxford onwards. He was primarily concerned with providing a philosophy of mind that is grounded in empirical research. In his original dissertation, *Content and Consciousness*, he broke up the problem of explaining the mind into the need for a theory of content and for a theory of consciousness. His approach to this project also stayed true to this distinction. Just as *Content and Consciousness* has a bipartite structure, he similarly divided *Brainstorms* into two sections. He would later collect several essays on content in *The Intentional Stance* and synthesize his views on consciousness into a unified theory in *Consciousness Explained*. These volumes respectively form the most extensive development of his views.
In chapter 5 of *Consciousness Explained*, Dennett described his multiple drafts model of consciousness. He stated that, \"all varieties of perception---indeed all varieties of thought or mental activity---are accomplished in the brain by parallel, multitrack processes of interpretation and elaboration of sensory inputs. Information entering the nervous system is under continuous \'editorial revision.\'\" (p. 111). Later he asserted, \"These yield, over the course of time, something *rather like* a narrative stream or sequence, which can be thought of as subject to continual editing by many processes distributed around the brain, \...\" (p. 135, emphasis in the original).
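Read as a processing claim, the passage describes many concurrent processes revising content with no single, finished narrative. The following toy sketch gestures at that structure only loosely; the editors and content fragments are invented, and genuine parallelism is simulated here by random interleaving.

```python
import random

# Toy gesture at the multiple drafts idea (not a cognitive model): several
# editorial processes keep revising content fragments, and what gets
# reported is just whichever draft a probe happens to sample.

rng = random.Random(1)
draft = ["flash", "sound"]  # initial sensory fragments

editors = [
    lambda d: d + ["red"] if "flash" in d and "red" not in d else d,  # elaborate
    lambda d: ["bang" if x == "sound" else x for x in d],             # reinterpret
    lambda d: [x for x in d if x != "flash"] or d,                    # revise away
]

history = []
for _ in range(6):
    editor = rng.choice(editors)  # asynchronous processes: order varies by run
    draft = editor(draft)
    history.append(list(draft))

# Probing at different moments yields different narratives; no draft is
# the privileged, canonical one.
print(history[1], history[-1])
```

The point of the toy is only structural: there is continual revision, and no finish line at which one draft becomes *the* experience.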
In *Consciousness Explained*, Dennett\'s interest in the ability of evolution to explain some of the content-producing features of consciousness is already apparent, and this later became an integral part of his program. He stated that his view was materialist and scientific, and he presented an argument against qualia: he argued that the concept of qualia is so confused that it cannot be put to any use or understood in any non-contradictory way, and therefore does not constitute a valid refutation of physicalism.
This view is rejected by neuroscientists Gerald Edelman, Antonio Damasio, Vilayanur Ramachandran, Giulio Tononi, and Rodolfo Llinás, all of whom state that qualia exist and that the desire to eliminate them is based on an erroneous interpretation on the part of some philosophers regarding what constitutes science.
Dennett\'s strategy mirrored his teacher Ryle\'s approach of redefining first-person phenomena in third-person terms, and denying the coherence of the concepts which this approach struggles with.
Dennett self-identified with a few terms: `{{blockquote|[Others] note that my "avoidance of the standard philosophical terminology for discussing such matters" often creates problems for me; philosophers have a hard time figuring out what I am saying and what I am denying. My refusal to play ball with my colleagues is deliberate, of course, since I view the standard philosophical terminology as worse than useless—a major obstacle to progress since it consists of so many errors.<ref>Daniel Dennett, ''The Message is: There is no Medium''</ref>}}`{=mediawiki}
In *Consciousness Explained*, he affirmed \"I am a sort of \'teleofunctionalist\', of course, perhaps the original teleofunctionalist\". He went on to say, \"I am ready to come out of the closet as some sort of verificationist.\" (pp. 460--61).
Dennett was credited with inspiring false belief tasks used in developmental psychology. He noted that when four-year-olds watch the Punch and Judy puppet show, they laugh because they know that they know more about what\'s going on than one of the characters does:
### Evolutionary debate {#evolutionary_debate}
Much of Dennett\'s work from the 1990s onwards was concerned with fleshing out his previous ideas by addressing the same topics from an evolutionary standpoint, from what distinguishes human minds from animal minds (*Kinds of Minds*), to how free will is compatible with a naturalist view of the world (*Freedom Evolves*).
Dennett saw evolution by natural selection as an algorithmic process (though he spelt out that algorithms as simple as long division often incorporate a significant degree of randomness). This idea is in conflict with the evolutionary philosophy of paleontologist Stephen Jay Gould, who preferred to stress the \"pluralism\" of evolution (i.e., its dependence on many crucial factors, of which natural selection is only one).
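The algorithmic reading can be made concrete with a minimal selection loop. The sketch below is a generic toy genetic algorithm, not code from any of Dennett\'s works; the fitness function and all parameters are invented for illustration.

```python
import random

def evolve(population, fitness, generations=100, mutation_sigma=0.1, seed=0):
    """Minimal selection algorithm: score, keep the fitter half,
    refill with mutated copies, repeat. Nothing here mentions biology;
    the procedure is substrate-neutral."""
    rng = random.Random(seed)
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
        offspring = [[g + rng.gauss(0, mutation_sigma) for g in parent]
                     for parent in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

# Toy fitness: genomes closer to the all-ones vector score higher.
def fitness(genome):
    return -sum((g - 1.0) ** 2 for g in genome)

rng = random.Random(7)
pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
print([round(g, 2) for g in evolve(pop, fitness)])
```

Randomness enters through mutation, but the sorting itself is a mindless, mechanical procedure, which is the sense in which Dennett called natural selection algorithmic.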
Dennett\'s views on evolution are identified as being strongly adaptationist, in line with his theory of the intentional stance, and the evolutionary views of biologist Richard Dawkins. In *Darwin\'s Dangerous Idea*, Dennett showed himself even more willing than Dawkins to defend adaptationism in print, devoting an entire chapter to a criticism of the ideas of Gould. This stems from Gould\'s long-running public debate with E. O. Wilson and other evolutionary biologists over human sociobiology and its descendant evolutionary psychology, which Gould and Richard Lewontin opposed, but which Dennett advocated, together with Dawkins and Steven Pinker. Gould argued that Dennett overstated his claims and misrepresented Gould\'s, to reinforce what Gould described as Dennett\'s \"Darwinian fundamentalism\".
Dennett\'s theories have had a significant influence on the work of evolutionary psychologist Geoffrey Miller.
### Religion and morality {#religion_and_morality}
Dennett was a vocal atheist and secularist, a member of the Secular Coalition for America advisory board, and a member of the Committee for Skeptical Inquiry, as well as an outspoken supporter of the Brights movement. Dennett was referred to as one of the \"Four Horsemen of New Atheism\", along with Richard Dawkins, Sam Harris, and the late Christopher Hitchens.
In *Darwin\'s Dangerous Idea*, Dennett wrote that evolution can account for the origin of morality. He rejected the idea that morality being natural to us implies that we should take a skeptical position regarding ethics, noting that what is fallacious in the naturalistic fallacy is not to support values per se, but rather to *rush* from facts to values.
In his 2006 book, *Breaking the Spell: Religion as a Natural Phenomenon*, Dennett attempted to account for religious belief naturalistically, explaining possible evolutionary reasons for the phenomenon of religious adherence. In this book he declared himself to be \"a bright\", and defended the term.
He did research into clerics who are secretly atheists and how they rationalize their works. He found what he called a \"don\'t ask, don\'t tell\" conspiracy because believers did not want to hear of loss of faith. This made unbelieving preachers feel isolated, but they did not want to lose their jobs and church-supplied lodgings. Generally, they consoled themselves with the belief that they were doing good in their pastoral roles by providing comfort and required ritual. The research, with Linda LaScola, was further extended to include other denominations and non-Christian clerics. The research and stories Dennett and LaScola accumulated during this project were published in their 2013 co-authored book, *Caught in the Pulpit: Leaving Belief Behind*.
### Memetics, postmodernism and deepity {#memetics_postmodernism_and_deepity}
Dennett wrote about and advocated the notion of memetics as a philosophically useful tool, his last work on this topic being his \"Brains, Computers, and Minds\", a three-part presentation through Harvard\'s MBB 2009 Distinguished Lecture Series.
Dennett was critical of postmodernism, having said: `{{blockquote|Postmodernism, the school of "thought" that proclaimed "There are no truths, only interpretations" has largely played itself out in absurdity, but it has left behind a generation of academics in the humanities disabled by their distrust of the very idea of truth and their disrespect for evidence, settling for "conversations" in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster.<ref>Dennett, Daniel (October 19, 2013). [http://edge.org/conversation/dennett-on-wieseltier-v-pinker-in-the-new-republic "Dennett on Wieseltier V. Pinker in The New Republic: Let's Start With A Respect For Truth."] {{Webarchive|url=https://web.archive.org/web/20180805021650/https://www.edge.org/conversation/dennett-on-wieseltier-v-pinker-in-the-new-republic |date=August 5, 2018 }} ''Edge.org''. Retrieved August 4, 2018.</ref>}}`{=mediawiki}
Dennett adopted and somewhat redefined the term \"deepity\", originally coined by Miriam Weizenbaum. Dennett used \"deepity\" for a statement that is apparently profound, but is actually trivial on one level and meaningless on another. Generally, a deepity has two (or more) meanings: one that is true but trivial, and another that sounds profound and would be important if true, but is actually false or meaningless. Examples are \"Que será será!\", \"Beauty is only skin deep!\", and \"The power of intention can transform your life.\" The term has since been widely cited.
### Artificial intelligence {#artificial_intelligence}
While approving of the increase in efficiency that humans reap by using resources such as expert systems in medicine or GPS in navigation, Dennett saw a danger in machines performing an ever-increasing proportion of basic tasks in perception, memory, and algorithmic computation because people may tend to anthropomorphize such systems and attribute intellectual powers to them that they do not possess. He believed the relevant danger from artificial intelligence (AI) is that people will misunderstand the nature of basically \"parasitic\" AI systems, rather than employing them constructively to challenge and develop the human user\'s powers of comprehension.
In the 1990s, Dennett collaborated with a group of computer scientists at MIT to attempt to develop a humanoid, conscious robot, named \"Cog\". The project did not produce a conscious robot, but Dennett argued that in principle it could have.
As given in his penultimate book, *From Bacteria to Bach and Back*, Dennett\'s views were contrary to those of Nick Bostrom. Although acknowledging that it is \"possible in principle\" to create AI with human-like comprehension and agency, Dennett maintained that the difficulties of any such \"strong AI\" project would be orders of magnitude greater than those raising concerns have realized. Dennett believed, as of the book\'s publication in 2017, that the prospect of superintelligence (AI massively exceeding the cognitive performance of humans in all domains) was at least 50 years away, and of far less pressing significance than other problems the world faces.
### Realism
Dennett was known for his nuanced stance on realism. While he supported scientific realism, advocating that entities and phenomena posited by scientific theories exist independently of our perceptions, he leant towards instrumentalism concerning certain theoretical entities, valuing their explanatory and predictive utility, as shown in his discussion of real patterns. Dennett\'s pragmatic realism underlines the entanglement of language, consciousness, and reality. He posited that our discourse about reality is mediated by our cognitive and linguistic capacities, marking a departure from naïve realism.
#### Realism and instrumentalism {#realism_and_instrumentalism}
Dennett\'s philosophical stance on realism was intricately connected to his views on instrumentalism and the theory of real patterns. He drew a distinction between illata, which are genuine theoretical entities like electrons, and abstracta, which are \"calculation bound entities or logical constructs\" such as centers of gravity and the equator, placing beliefs and the like among the latter. One of Dennett\'s principal arguments was an instrumentalistic construal of intentional attributions, asserting that such attributions are environment-relative.
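The center of gravity, Dennett\'s stock example of an abstractum, makes the contrast easy to compute: the point masses below play the role of concrete constituents, while the computed center is a calculation-bound construct that nonetheless supports real predictions. The values are invented for illustration.

```python
def center_of_gravity(masses, positions):
    """Compute a center of gravity: not an extra physical constituent
    (an illatum) but a calculation-bound construct (an abstractum)
    that still earns its keep in prediction and explanation."""
    total = sum(masses)
    return tuple(
        sum(m * p[i] for m, p in zip(masses, positions)) / total
        for i in range(len(positions[0]))
    )

# Three point masses (illustrative values); the center of gravity is
# nowhere on the list of parts, yet predicts how the system balances.
masses = [2.0, 1.0, 1.0]
positions = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(center_of_gravity(masses, positions))  # (1.0, 1.0)
```

On the view sketched here, beliefs are more like the computed point than like the masses: real patterns that do predictive work, but not additional inner particulars.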
In discussing intentional states, Dennett posited that they should not be thought of as resembling theoretical entities, but rather as logical constructs, avoiding the pitfalls of intentional realism without lapsing into pure instrumentalism or even eliminativism. His instrumentalism and anti-realism were crucial aspects of his view on intentionality, emphasizing the centrality and indispensability of the intentional stance to our conceptual scheme.
## Recognition
Dennett was the recipient of a Fellowship at the Center for Advanced Study in the Behavioral Sciences. He was a Fellow of the Committee for Skeptical Inquiry and a Humanist Laureate of the International Academy of Humanism. He was named 2004 Humanist of the Year by the American Humanist Association. In 2006, Dennett received the Golden Plate Award of the American Academy of Achievement. He became a Fellow of the American Association for the Advancement of Science in 2009.
In February 2010, he was named to the Freedom From Religion Foundation\'s Honorary Board of distinguished achievers. In 2012, he was awarded the Erasmus Prize, an annual award for a person who has made an exceptional contribution to European culture, society or social science, \"for his ability to translate the cultural significance of science and technology to a broad audience\". In 2018, he was awarded an honorary doctorate (Dr.h.c.) by the Radboud University in Nijmegen, Netherlands, for his contributions to and influence on cross-disciplinary science.
## Personal life {#personal_life}
In 1962, Dennett married Susan Bell. They lived in North Andover, Massachusetts, and had a daughter, a son, and six grandchildren. He was an avid sailor who loved sailing *Xanthippe*, his 13-meter sailboat. He also played many musical instruments and sang in glee clubs.
Dennett died of interstitial lung disease at Maine Medical Center on April 19, 2024, at the age of 82.
## Selected works {#selected_works}
- *Brainstorms: Philosophical Essays on Mind and Psychology* (MIT Press 1981) (`{{ISBN|0-262-54037-1}}`{=mediawiki})
- *Elbow Room: The Varieties of Free Will Worth Wanting* (MIT Press 1984) -- on free will and determinism (`{{ISBN|0-262-04077-8}}`{=mediawiki})
- *Content and Consciousness* (Routledge & Kegan Paul Books Ltd; 2nd ed. 1986) (`{{ISBN|0-7102-0846-4}}`{=mediawiki})
- *The Intentional Stance* (MIT Press; first published 1987)
- *Consciousness Explained* (Little, Brown 1991)
- *Darwin\'s Dangerous Idea: Evolution and the Meanings of Life* (Simon & Schuster; reprint edition 1996) (`{{ISBN|0-684-82471-X}}`{=mediawiki})
- *Kinds of Minds: Towards an Understanding of Consciousness* (Basic Books 1997) (`{{ISBN|0-465-07351-4}}`{=mediawiki})
- *Brainchildren: Essays on Designing Minds (Representation and Mind)* (MIT Press 1998) (`{{ISBN|0-262-04166-9}}`{=mediawiki}) -- A Collection of Essays 1984--1996
-
- *Freedom Evolves* (Viking Press 2003) (`{{ISBN|0-670-03186-0}}`{=mediawiki})
- *Sweet Dreams: Philosophical Obstacles to a Science of Consciousness* (MIT Press 2005) (`{{ISBN|0-262-04225-8}}`{=mediawiki})
- *Breaking the Spell: Religion as a Natural Phenomenon* (Penguin Group 2006) (`{{ISBN|0-670-03472-X}}`{=mediawiki})
- *Neuroscience and Philosophy: Brain, Mind, and Language* (Columbia University Press 2007) (`{{ISBN|978-0-231-14044-7}}`{=mediawiki}), co-authored with Max Bennett, Peter Hacker, and John Searle
- *Science and Religion: Are They Compatible?* (Oxford University Press 2010) (`{{ISBN|0-199-73842-4}}`{=mediawiki}), co-authored with Alvin Plantinga
- *Inside Jokes: Using Humor to Reverse-Engineer the Mind* (MIT Press 2011) (`{{ISBN|978-0-262-01582-0}}`{=mediawiki}), co-authored with Matthew M. Hurley and Reginald B. Adams Jr.
- *Intuition Pumps and Other Tools for Thinking* (W. W. Norton & Company 2013) (`{{ISBN|0-393-08206-7}}`{=mediawiki})
- *Caught in the Pulpit: Leaving Belief Behind* (Pitchstone Publishing 2013) (`{{ISBN|978-1634310208}}`{=mediawiki}), co-authored with Linda LaScola
- *From Bacteria to Bach and Back: The Evolution of Minds* (W. W. Norton & Company 2017)