of the world. Even in Eastern Europe, industrialization lagged far behind. Russia, for example, remained largely rural and agricultural, and its autocratic rulers kept the peasants in serfdom. The concept of Central Europe was already known at the beginning of the 19th century, but its real life began in the 20th century, when it immediately became an object of intense interest. The very first concept, however, mixed science, politics and economy – it was strictly connected with the intensively growing German economy and its aspirations to dominate a part of the European continent called Mitteleuropa. The German term denoting Central Europe was so fashionable that other languages started referring to it when indicating territories from the Rhine to the Vistula, or even the Dnieper, and from the Baltic Sea to the Balkans. An example of that era's vision of Central Europe may be seen in Joseph Partsch's book of 1903. On 21 January 1904, the Mitteleuropäischer Wirtschaftsverein (Central European Economic Association) was established in Berlin with the economic integration of Germany and Austria–Hungary (with eventual extension to Switzerland, Belgium and the Netherlands) as its main aim. Once again, the term Central Europe became connected to German plans of political, economic and cultural domination. The "bible" of the concept was Friedrich Naumann's book Mitteleuropa, in which he called for an economic federation to be established after World War I. Naumann's idea was that the federation would have at its centre Germany and the Austro-Hungarian Empire but would also include all European nations outside the Triple Entente. The concept failed after the German defeat in World War I and the dissolution of Austria-Hungary. A revival of the idea may be observed during the Hitler era.

Interwar period
According to Emmanuel de Martonne, in 1927 the Central European countries included Austria, Czechoslovakia, Germany, Hungary, Poland, Romania and Switzerland. The author used both human and physical geographical features to define Central Europe, but did not take into account the legal, social, cultural, economic or infrastructural development of these countries. The interwar period (1918–1938) brought a new geopolitical system, as well as economic and political problems, and the concept of Central Europe took on a different character. The centre of interest moved to its eastern part – the countries that had (re)appeared on the map of Europe: Czechoslovakia, Hungary and Poland. Central Europe ceased to be the area of German aspiration to lead or dominate and became a territory of various integration movements aiming at resolving the political, economic and national problems of the "new" states, a way to face German and Soviet pressures. However, the conflicts of interest were too great, and neither the Little Entente nor the Intermarium (Międzymorze) idea succeeded. Matters were not helped by the fact that Czechoslovakia stood alone as the only multicultural, democratic and liberal state among its neighbours. The events preceding World War II in Europe – including the so-called Western betrayal and the Munich Agreement – were enabled in large part by the rising nationalism and ethnocentrism that typified the period. The interwar period brought new elements to the concept of Central Europe.
Before World War I, it embraced mainly the German states (Germany, Austria), with the non-German territories being an area of intended German penetration and domination – a German leadership position was to be the natural result of economic dominance. After the war, the eastern part of Central Europe was placed at the centre of the concept. At that time scholars took an interest in the idea: the International Historical Congress in Brussels in 1923 was committed to Central Europe, and the 1933 Congress continued the discussions. Hungarian historian Magda Ádám wrote in her study Versailles System and Central Europe (2006): "Today we know that the bane of Central Europe was the Little Entente, military alliance of Czechoslovakia, Romania and Kingdom of Serbs, Croats and Slovenes (later Yugoslavia), created in 1921 not for Central Europe's cooperation nor to fight German expansion, but in a wrong perceived notion that a completely powerless Hungary must be kept down". The avant-garde movements of Central Europe were an essential part of modernism's evolution, which reached its peak throughout the continent during the 1920s. The Sourcebook of Central European Avant-Gardes (Los Angeles County Museum of Art) contains primary documents of the avant-gardes in Austria, Czechoslovakia, Germany, Hungary and Poland from 1910 to 1930. The manifestos and magazines of Central European radical art circles are well known to Western scholars and are taught at leading universities in the Western world.

Mitteleuropa
Mitteleuropa may refer to a historical concept or to a contemporary German definition of Central Europe. As a historical concept, the German term Mitteleuropa (or its literal English translation, Middle Europe) is ambiguous. It is sometimes used in English to refer to an area somewhat larger than most conceptions of "Central Europe"; in that sense it denotes territories under Germanic cultural hegemony until World War I (encompassing Austria–Hungary and Germany in their pre-war formations but usually excluding the Baltic countries north of East Prussia). According to Fritz Fischer, Mitteleuropa was a scheme of the era of the Reich of 1871–1918 by which the old imperial elites had allegedly sought to build a system of German economic, military and political domination stretching from the northern seas to the Near East and from the Low Countries through the steppes of Russia to the Caucasus. Later, professor Fritz Epstein argued that the threat of a Slavic "Drang nach Westen" (drive towards the West) had been a major factor in the emergence of a Mitteleuropa ideology before the Reich of 1871 ever came into being. In Germany the connotation was also sometimes linked to the pre-war German provinces east of the Oder–Neisse line. The term "Mitteleuropa" conjures up negative historical associations among some elderly people, although the Germans have not played an exclusively negative role in the region. Most Central European Jews embraced the enlightened German humanistic culture of the 19th century. German-speaking Jews from turn-of-the-20th-century Vienna, Budapest and Prague became representatives of what many consider Central European culture at its best, though the Nazi version of "Mitteleuropa" destroyed this culture. However, the term "Mitteleuropa" is now widely used again in German education and media without negative connotations, especially since the end of communism.
In fact, many people from the new states of Germany do not identify themselves as being part of Western Europe and therefore prefer the term "Mitteleuropa".

Central Europe during World War II
During World War II, Central Europe was largely occupied by Nazi Germany. Many areas became battlegrounds and were devastated. The mass murder of the Jews depopulated many of their centuries-old settlement areas; other people were settled in their place, and their culture was wiped out. Both Adolf Hitler and Joseph Stalin diametrically opposed the centuries-old Habsburg principles of "live and let live" with regard to ethnic groups, peoples, minorities, religions, cultures and languages, and tried to assert their own ideologies and power interests in Central Europe. There were various Allied plans for the post-war political order in Central Europe. While Stalin tried to get as many states under his control as possible, Winston Churchill preferred a Central European Danube confederation as a counterweight to Germany and Russia. There were also plans to add Bavaria and Württemberg to an enlarged Austria, and various resistance movements around Otto von Habsburg pursued this goal. The group around the Austrian priest Heinrich Maier also worked in this direction; it additionally helped the Allied war effort by, among other things, forwarding the locations of production sites and plans for V-2 rockets, Tiger tanks and aircraft to the USA. Otto von Habsburg likewise tried to detach Hungary from the grasp of Nazi Germany and the USSR. There were various proposals for containing German power in Europe after the war. Churchill's idea of reaching the area around Vienna and Budapest before the Russians via an operation from the Adriatic was not approved by the Western Allied chiefs of staff. As a result of the military situation at the end of the war, Stalin's plans prevailed and much of Central Europe came under Russian control.

Central Europe behind the Iron Curtain
Following World War II, large parts of Europe that were culturally and historically Western became part of the Eastern Bloc. Czech author Milan Kundera (an emigrant to France) thus wrote in 1984 about the "Tragedy of Central Europe" in the New York Review of Books. The boundary between the two blocs was called the Iron Curtain. Consequently, the English term Central Europe was increasingly applied only to the westernmost former Warsaw Pact countries (East Germany, Poland, Czechoslovakia, Hungary) to specify them as communist states that were culturally tied to Western Europe. This usage continued after the end of the Warsaw Pact, when these countries started to undergo transition. The post-World War II period brought a blocking of research on Central Europe in the Eastern Bloc countries, as its every result demonstrated the distinctiveness of Central Europe, which was inconsistent with Stalinist doctrine. On the other hand, the topic became popular in Western Europe and the United States, with much of the research carried out by immigrants from Central Europe. With the end of communism, publicists and historians in Central Europe, especially the anti-communist opposition, returned to their research. According to Karl A. Sinnhuber (Central Europe: Mitteleuropa: Europe Centrale: An Analysis of a Geographical Term), most Central European states were unable to preserve their political independence and became Soviet satellite states.
Besides Austria, only the marginal European states of Finland and Yugoslavia preserved their political sovereignty to a certain degree, remaining outside the military alliances in Europe. The opening of the Iron Curtain between Austria and Hungary at the Pan-European Picnic on 19 August 1989 then set in motion a peaceful chain reaction, at the end of which there was no longer an East Germany and the Eastern Bloc had disintegrated. It was the largest escape movement from East Germany since the Berlin Wall was built in 1961. After the picnic, which was based on an idea by Otto von Habsburg to test the reaction of the USSR and Mikhail Gorbachev to an opening of the border, tens of thousands of East Germans, informed by the media, set off for Hungary. The leadership of the GDR in East Berlin did not dare to completely block the borders of its own country, and the USSR did not respond at all. This broke the cohesion of the Eastern Bloc, and Central Europe subsequently became free of communism.

Roles
According to American professor Ronald Tiersky, the 1991 summit held in Visegrád, Hungary, and attended by the Polish, Hungarian and Czechoslovak presidents was hailed at the time as a major breakthrough in Central European cooperation, but the Visegrád Group became a vehicle for coordinating Central Europe's road to the European Union, while the development of closer ties within the region languished. American professor Peter J. Katzenstein described Central Europe as a way station in a Europeanization process that marks the transformation of the Visegrád Group countries in different, though comparable, ways. According to him, in Germany's contemporary public discourse "Central European identity" refers to the civilizational divide between Catholicism and Eastern Orthodoxy. He says there is no precise, uncontestable way to decide whether Lithuania, Latvia, Estonia, Serbia, Croatia, Slovenia, Romania or Bulgaria are parts of Central Europe.

Definitions
Rather than a physical entity, Central Europe is a concept of shared history that contrasts with that of the surrounding regions. The issue of how to name and define the Central European area is subject to debate. Very often, the definition depends on the nationality and historical perspective of its author.

Academic
The main proposed regional definitions, gathered by Polish historian Jerzy Kłoczowski, include:
West-Central and East-Central Europe – this conception, presented in 1950, distinguishes two regions in Central Europe: the German West-Centre, with the imperial tradition of the Reich, and the East-Centre, covering a variety of nations from Finland to Greece placed between the great empires of Scandinavia, Germany, Italy and the Soviet Union.
Central Europe as the area of cultural heritage of the Polish–Lithuanian Commonwealth – Ukrainian, Belarusian and Lithuanian historians, in cooperation (since 1990) with Polish historians, insist on the importance of the concept.
Central Europe as a region connected to Western civilisation for a very long time, including countries such as the Polish–Lithuanian Commonwealth, the Kingdom of Croatia, the Holy Roman Empire (later the German Empire) and the Habsburg Monarchy, the Kingdom of Hungary and the Crown of Bohemia. Central Europe understood in this way borders on Russia and South-Eastern Europe, but the exact frontier of the region is difficult to determine.
Central Europe as the area of cultural heritage of the Habsburg Empire (later Austria-Hungary) – a concept popular in regions along the river Danube: Austria, the Czech Republic and Slovakia, Slovenia, large parts of Croatia, Romania and Serbia, and smaller parts of Poland and Ukraine. In Hungary, the narrowing of Central Europe to the former Habsburg lands is not popular.
A concept underlining the links connecting Belarus, Moldova and Ukraine with Russia and treating the Russian Empire, together with the whole Slavic Orthodox population, as one entity – this position is taken by Russian historiography.
A concept putting the accent on links with the West, especially from the 19th century and the grand period of liberation and formation of nation-states – this idea is represented by the South-Eastern states, which prefer the enlarged concept of the "East Centre" expressing their links with Western culture.
Former University of Vienna professor Lonnie R. Johnson points out criteria to distinguish Central Europe from Western, Eastern and Southeast Europe:
One criterion for defining Central Europe is the frontiers of medieval empires and kingdoms that largely correspond to the religious frontiers between the Catholic West and the Orthodox East. The pagans of Central Europe were converted to Catholicism, while in Southeastern and Eastern Europe they were brought into the fold of the Eastern Orthodox Church.
Multinational empires were a characteristic of Central Europe. Hungary and Poland, small and medium-size states today, were empires during their early histories. The historical Kingdom of Hungary was until 1918 three times larger than Hungary is today, while Poland was the largest state in Europe in the 16th century. Both these kingdoms housed a wide variety of different peoples.
He also thinks that Central Europe is a dynamic historical concept, not a static spatial one. For example, Lithuania, a fair share of Belarus and western Ukraine are in Eastern Europe today, but were once part of the Polish–Lithuanian Commonwealth. Johnson's study on Central Europe received acclaim and positive reviews in the scientific community. However, according to Romanian researcher Maria Bucur, this very ambitious project suffers from the weaknesses imposed by its scope (almost 1600 years of history).

Encyclopedias, gazetteers, dictionaries
The Columbia Encyclopedia defines Central Europe as Germany, Switzerland, Liechtenstein, Austria, Poland, the Czech Republic, Slovakia and Hungary. The World Factbook uses a similar definition and also adds Slovenia. Encarta Encyclopedia and Encyclopædia Britannica do not clearly define the region, but Encarta places the same countries in Central Europe in its individual country articles, adding Slovenia as part of "south central Europe". The German encyclopaedia Meyers Grosses Taschenlexikon (Meyers Big Pocket Encyclopedia), 1999, defines Central Europe as the central part of Europe with no precise borders to the east and west. The term is mostly used to denominate the territory between the Schelde and the Vistula and from the Danube to the Moravian Gate. Usually the countries considered to be Central European are Austria, Croatia, the Czech Republic, Germany, Hungary, Liechtenstein, Poland, Slovakia, Slovenia and Switzerland; in the broader sense Romania and Serbia too, and occasionally also Belgium, the Netherlands and Luxembourg.
According to Meyers Enzyklopädisches Lexikon, Central Europe is a part of Europe composed of Austria, Belgium, Czechia, Slovakia, Germany, Hungary, Luxembourg, the Netherlands, Poland, Romania and Switzerland, the northern marginal regions of Italy and Yugoslavia (the northern states – Croatia, Serbia and Slovenia), as well as northeastern France. The German Ständiger Ausschuss für geographische Namen (Standing Committee on Geographical Names), which develops and recommends rules for the uniform use of geographical names, proposes two sets of boundaries. The first follows the international borders of current countries. The second subdivides and includes some countries based on cultural criteria. In comparison to some other definitions, it is broader, including Luxembourg, Croatia, Estonia, Latvia and Lithuania, and, in the second sense, parts of Russia, Belarus, Ukraine, Romania, Serbia, Italy and France.

Geographical
There is no general agreement on what geographic area constitutes Central Europe, nor on how to subdivide it further geographically. At times, the term "Central Europe" denotes a geographic definition as the Danube region in the heart of the continent, including the language and culture areas which are today included in the states of Croatia, the Czech Republic, Hungary, Poland, Romania, Serbia, Slovakia, Slovenia and usually also Austria and Germany, but never Russia and other countries of the former Soviet Union towards the Ural mountains.

Governmental and standards organisations
The term EU11 countries refers to the Central, Eastern and Baltic European member states that acceded in 2004 and later: in 2004 the Czech Republic, Estonia, Latvia, Lithuania, Hungary, Poland, Slovenia and the Slovak Republic; in 2007 Bulgaria and Romania; and in 2013 Croatia.

States
The comprehension of the concept of Central Europe is an ongoing source of controversy, though the Visegrád Group constituents are almost always included as de facto Central European countries. Although views on which countries belong to Central Europe vary widely, according to many sources (see the Definitions section) the region includes Austria, the Czech Republic, Germany, Hungary, Liechtenstein, Poland, Slovakia, Slovenia and Switzerland. Depending on context, Central European countries are sometimes grouped with Eastern or Western European countries, collectively or individually: for instance, Austria can be referred to as Central European, Eastern European or Western European, and Slovenia is sometimes placed in Southeastern or Eastern Europe.

Other countries and regions
Some sources also add neighbouring countries for historical reasons (the former Austro-Hungarian and German Empires, and modern Estonia, Latvia and Lithuania), or on geographical and/or cultural grounds: Croatia (alternatively placed in Southeast Europe); Romania (Transylvania, along with Banat, Crișana and Maramureș, as well as Bukovina); Russia (Kaliningrad Oblast); Serbia (primarily Vojvodina and Northern Belgrade); Ukraine (Transcarpathia, Galicia and Northern Bukovina); and Luxembourg. The three Baltic countries (Lithuania, Latvia and Estonia), geographically in Northern Europe, have been considered part of Central Europe in the German tradition of the term, Mitteleuropa. Benelux countries are generally considered a part of Western Europe, rather than Central Europe.
Nevertheless, they are occasionally mentioned in the Central European context due to cultural, historical and linguistic ties. Some regions of neighbouring states may sometimes be included in Central Europe: Italy (South Tyrol, Trentino, Trieste and Gorizia, Friuli, Veneto in full or in part, and occasionally Lombardy or all of Northern Italy); France (Alsace, Franconian Lorraine, occasionally the whole of Lorraine, Franche-Comté, the Ardennes and Savoy); and Belgium (the Ardennes).

Geography
Geography defines Central Europe's natural borders with the neighbouring regions to the north across the Baltic Sea, namely Northern Europe (or Scandinavia), and to the south across the Alps, the Apennine peninsula (or Italy), and the Balkan peninsula across the Soča–Krka–Sava–Danube line. The borders with Western Europe and Eastern Europe are geographically less defined, and for this reason the cultural and historical boundaries migrate more easily west–east than south–north. The river Rhine, which runs south–north through western Germany, is an exception. Southwards, the Pannonian Plain is bounded by the rivers Sava and Danube and their respective floodplains. The Pannonian Plain stretches over the following countries: Austria, Croatia, Hungary, Romania, Serbia, Slovakia and Slovenia, and touches the borders of Bosnia and Herzegovina and Ukraine ("peri-Pannonian states").
As the southeastern division of the Eastern Alps, the Dinaric Alps extend for 650 kilometres along the coast of the Adriatic Sea (northwest–southeast), from the Julian Alps in the northwest down to the Šar-Korab massif, north–south. According to the Freie Universität Berlin, this mountain chain is classified as South Central European. The city of Trieste in this area, for example, expressly sees itself as a città mitteleuropea, particularly because it lies at the interface between Latin, Slavic, Germanic, Greek and Jewish cultures on the one hand and the geographical area of the Mediterranean and the Alps on the other; the assignment is both geographical and cultural. The Central European flora region stretches from Central France (the Massif Central) to Central Romania (the Carpathians) and Southern Scandinavia.

Demography
Central Europe is one of the continent's most populous regions. It includes countries of varied sizes, ranging from tiny Liechtenstein to Germany, the second largest European country by population. Demographic figures for the countries located entirely within the notion of Central Europe ("the core countries") add up to around 165 million people, of which around 82 million are residents of Germany. The other populations include Poland with around 38.5 million residents, the Czech Republic at 10.5 million, Hungary at 10 million, Austria with 8.8 million, Switzerland with 8.5 million, Slovakia at 5.4 million, and Liechtenstein at a bit less than 40,000. If the countries that are only occasionally included in Central Europe were counted as well, in part or in whole – Croatia (4.3 million), Slovenia (2 million, 2014 estimate), Romania (20 million), Lithuania (2.9 million), Latvia (2 million), Estonia (1.3 million) and Serbia (7.1 million) – this would add between 25 and 35 million people, depending on whether a regional or an integral approach is used.
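As a rough arithmetic check of the figures quoted above, the stated populations can simply be summed. The minimal Python sketch below uses only the rounded values given in this section; the quoted 25–35 million range for the occasionally included countries is lower than the whole-country sum because it counts some of those countries only in part.

```python
# Rough check of the population figures quoted in this section (values in millions).
core = {
    "Germany": 82.0, "Poland": 38.5, "Czech Republic": 10.5, "Hungary": 10.0,
    "Austria": 8.8, "Switzerland": 8.5, "Slovakia": 5.4, "Liechtenstein": 0.04,
}
occasionally_included = {
    "Croatia": 4.3, "Slovenia": 2.0, "Romania": 20.0, "Lithuania": 2.9,
    "Latvia": 2.0, "Estonia": 1.3, "Serbia": 7.1,
}

core_total = sum(core.values())                    # ~163.7 million, i.e. "around 165 million"
extra_total = sum(occasionally_included.values())  # ~39.6 million if counted as whole countries

print(f"Core countries: ~{core_total:.1f} million")
print(f"Occasionally included (whole countries): ~{extra_total:.1f} million")
print(f"Combined: ~{core_total + extra_total:.1f} million")
```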
If the smaller western and eastern historical parts of Central Europe were also included in the demographic corpus, a further 20 million people of different nationalities would be added to the overall count, and it would surpass the 200 million figure.

Economy

Currencies
Currently, the regional members of the Eurozone include Austria, Germany, Luxembourg, Slovakia and Slovenia. Croatia, the Czech Republic, Hungary and Poland use their own currencies (the Croatian kuna, Czech koruna, Hungarian forint and Polish złoty), but are obliged to adopt the euro. Switzerland uses its own currency, the Swiss franc, as do Serbia (the Serbian dinar) and Romania (the Romanian leu).

Human Development Index
In 2018, Switzerland topped the HDI list among Central European countries, also ranking second in the world. Serbia rounded out the regional list at 11th (67th in the world).

Globalisation
According to the index of globalisation for Central European countries (2016 data), Switzerland topped this list as well (first in the world).

Prosperity Index
The Legatum Prosperity Index indicates average to high levels of prosperity in Central Europe (2018 data). Switzerland topped the regional index (fourth in the world).

Corruption
Most countries in Central Europe tend to score above the average in the Corruption Perceptions Index (2018 data), led by Switzerland, Germany and Austria.

Infrastructure
Industrialisation occurred early in Central Europe, which prompted the construction of rail and other types of infrastructure.

Rail
Central Europe contains the continent's earliest railway systems, whose greatest expansion was recorded in Austro-Hungarian and German territories in the 1860s–1870s. By the mid-19th century, Berlin, Vienna and Buda/Pest were focal points for network lines connecting the industrial areas of Saxony, Silesia, Bohemia, Moravia and Lower Austria with the Baltic (Kiel, Szczecin) and the Adriatic (Rijeka, Trieste). Rail infrastructure in Central Europe remains the densest in the world. Railway density, measured as the total length of lines operated (in km) per 1,000 km², is highest in the Czech Republic (198.6), Poland (121.0), Slovenia (108.0), Germany (105.5), Hungary (98.7), Serbia (87.3), Slovakia (73.9) and Croatia (72.5) when compared with most of Europe and the rest of the world.

River transport and canals
Before the first railroads appeared in the 1840s, river transport constituted the main means of communication and trade. The earliest canals included the Plauen Canal (1745), the Finow Canal, and the Bega Canal (1710), which connected Timișoara to Novi Sad and Belgrade via the Danube. The most significant achievement in this regard was the facilitation of navigability on the Danube from the Black Sea to Ulm in the 19th century.

Branches
Compared to most of Europe, the economies of Austria, Croatia, the Czech Republic, Germany, Hungary, Poland, Slovakia, Slovenia and Switzerland tend to demonstrate high complexity. Industrialisation reached Central Europe relatively early: the Czech lands (1797); Luxembourg and Germany by 1860; Poland, Slovakia and Switzerland by 1870; and Austria, Croatia, Hungary, Liechtenstein, Romania, Serbia and Slovenia by 1880.

Agriculture
Central European countries are some of the most significant food producers in the world. Germany is the world's largest hop producer with a 34.27% share in 2010, the third largest producer of rye and barley, the fifth largest rapeseed producer, the sixth largest milk producer, and the fifth largest potato producer.
Poland is the world's largest triticale producer, the second largest producer of raspberries and currants, the third largest producer of rye, the fifth largest producer of apples and buckwheat, and the seventh largest producer of potatoes. The Czech Republic is the world's fourth largest hop producer and eighth largest producer of triticale. Hungary is the world's fifth largest hop producer and seventh largest triticale producer. Serbia is the world's second largest producer of plums and second largest producer of raspberries. Slovenia is the world's sixth largest hop producer.

Business
Central European business has a regional organisation, the Central European Business Association (CEBA), founded in 1996 in New York as a non-profit organization dedicated to promoting business opportunities within Central Europe and supporting the advancement of professionals in America with a Central European background.

Tourism
Central European countries, especially Austria, Croatia, Germany and Switzerland, are some of the most competitive tourism destinations.

Outsourcing destination
Poland is presently a major destination for outsourcing. Kraków, Warsaw and Wrocław (Poland), Prague and Brno (Czech Republic), Budapest (Hungary), Bucharest (Romania), Bratislava (Slovakia), Ljubljana (Slovenia), Belgrade (Serbia) and Zagreb (Croatia) are among the world's top 100 outsourcing destinations.

Education

Languages
Various languages are taught in Central Europe, with certain languages being more popular in different countries.

Education performance
Student performance has varied across Central Europe, according to the Programme for International Student Assessment. In the 2012 study, countries scored medium, below or over the average.
Gaspé Peninsula and the Atlantic Provinces, creating rolling hills indented by river valleys. It also runs through parts of southern Quebec. The Appalachian Mountains (more specifically the Chic-Choc, Notre Dame, and Long Range Mountains) are an old and eroded range of mountains, approximately 380 million years in age. Notable mountains in the Appalachians include Mount Jacques-Cartier (Quebec), Mount Carleton (New Brunswick) and The Cabox (Newfoundland). Parts of the Appalachians are home to a rich endemic flora and fauna and are considered to have been nunataks during the last glaciation era.

Great Lakes and St. Lawrence Lowlands
The southern parts of Quebec and Ontario, in the region of the Great Lakes (bordered entirely by Ontario on the Canadian side) and the St. Lawrence River basin (often called the St. Lawrence Lowlands), form another particularly rich sedimentary plain. Prior to its colonization and the heavy urban sprawl of the 20th century, this Eastern Great Lakes lowland forests area was home to large mixed forests covering a mostly flat area of land between the Appalachian Mountains and the Canadian Shield. Most of this forest has been cleared for agriculture and through logging operations, but the remaining forests are for the most part heavily protected. In this part of Canada begins one of the world's largest estuaries, the Estuary of Saint Lawrence (see Gulf of St. Lawrence lowland forests). While the relief of these lowlands is particularly flat and regular, a group of batholiths known as the Monteregian Hills is spread along a mostly regular line across the area. The most notable are Montreal's Mount Royal and Mont Saint-Hilaire. These hills are known for their great richness in precious minerals.

Canadian Shield
The northeastern part of Alberta, the northern parts of Saskatchewan, Manitoba, Ontario and Quebec, all of Labrador and the Great Northern Peninsula of Newfoundland, the eastern mainland of the Northwest Territories, most of Nunavut's mainland and, of its Arctic Archipelago, Baffin Island and significant bands through Somerset, Southampton, Devon and Ellesmere islands are located on a vast rock base known as the Canadian Shield. The Shield mostly consists of eroded hilly terrain and contains many lakes and important rivers used for hydroelectric production, particularly in northern Quebec and Ontario. The Shield also encloses an area of wetlands around Hudson Bay. Some particular regions of the Shield are referred to as mountain ranges, including the Torngat and Laurentian Mountains. The Shield cannot support intensive agriculture, although there is subsistence agriculture and small dairy farms in many of the river valleys and around the abundant lakes, particularly in the southern regions. Boreal forest covers much of the Shield, with a mix of conifers that provide valuable timber resources in areas such as the Central Canadian Shield forests ecoregion that covers much of Northern Ontario. The Canadian Shield is known for its vast mineral reserves, such as emeralds, diamonds and copper, and is also called the "mineral house".

Canadian Interior Plains
The Canadian Prairies, the Canadian portion of the Great Plains, are part of a vast sedimentary plain covering much of Alberta, southern Saskatchewan and southwestern Manitoba, as well as much of the taiga and boreal region between the Canadian Rockies and the Great Slave and Great Bear Lakes in the Northwest Territories.
The plains generally describe the expanses of (largely flat) arable agricultural land that sustain extensive grain-farming operations in the southern part of the provinces. Despite this, some areas such as the Cypress Hills and the Alberta Badlands are quite hilly, and the prairie provinces contain large areas of forest such as the Mid-Continental Canadian forests.

Canadian Arctic
While the largest part of the Canadian Arctic is composed of seemingly endless permafrost and tundra north of the tree line, it encompasses geological regions of varying types: the Arctic Cordillera (with the British Empire Range and the United States Range on Ellesmere Island) contains the northernmost mountain system in the world. The Arctic Lowlands and Hudson Bay lowlands comprise a substantial part of the geographic region often designated as the Canadian Shield (in contrast to the geologic area alone). The ground in the Arctic is mostly composed of permafrost, making construction difficult and often hazardous, and agriculture virtually impossible. The Arctic, when defined as everything north of the tree line, covers most of Nunavut and the northernmost parts of the Northwest Territories, Yukon, Manitoba, Ontario, Quebec and Labrador.

Western Cordillera
The Coast Mountains in British Columbia run from the lower Fraser River and the Fraser Canyon northwestward, separating the Interior Plateau from the Pacific Ocean. Their southeastern end is separated from the North Cascades by the Fraser Lowland, where nearly a third of Western Canada's population resides. The coastal flank of the Coast Mountains is characterized by an intense network of fjords and associated islands, very similar to the Norwegian coastline in Northern Europe, while their inland side transitions to the high plateau with dryland valleys notable for a series of large alpine lakes similar to those in southern Switzerland, beginning in deep mountains and ending in flatland. They are subdivided into three main groups: the Pacific Ranges between the Fraser River and Bella Coola, the Kitimat Ranges from there northwards to the Nass River, and the Boundary Ranges from there to the mountain terminus in Yukon at Champagne Pass and Chilkat Pass northwest of Haines, Alaska. The Saint Elias Mountains lie to their west and northwest, while the Yukon Ranges and Yukon Basin lie to their north. On the inland side of the Boundary Ranges are the Tahltan and Tagish Highlands and also the Skeena Mountains, part of the Interior Mountains system, which also extends southwards on the inland side of the Kitimat Ranges. The terrain of the main spine of the Coast Mountains is typified by heavy glaciation, including several very large icefields of varying elevation. Of the three subdivisions, the Pacific Ranges are the highest and are crowned by Mount Waddington, while the Boundary Ranges contain the largest icefields, the Juneau Icefield being the largest. The Kitimat Ranges are lower and less glacier-covered than either of the other two groupings, but are extremely rugged and dense. The Coast Mountains are made of igneous and metamorphic rock from an episode of arc volcanism related to the subduction of the Kula and Farallon Plates during the Laramide orogeny about 100 million years ago. The widespread granite forming the Coast Mountains formed when magma intruded and cooled at depth beneath the volcanoes of the Coast Range Arc, whereas the metamorphic rock formed when intruding magma heated the surrounding rock to produce schist.
The Insular Mountains extend from Vancouver Island in the south to the Queen Charlotte Islands in the north on the British Columbia Coast. They contain two main mountain ranges, the Vancouver Island Ranges on Vancouver Island and the Queen Charlotte Mountains on the Queen Charlotte Islands.

Extreme points
The northernmost point of land within the boundaries of Canada is Cape Columbia, Ellesmere Island, Nunavut. The northernmost point of the Canadian mainland is Zenith Point on Boothia Peninsula, Nunavut. The southernmost point is Middle Island, in Lake Erie, Ontario (41°41′N 82°40′W); the southernmost water point lies just south of the island, on the Ontario–Ohio border (41°40′35″N). The southernmost point of the Canadian mainland is Point Pelee, Ontario. The lowest point is sea level at 0 m, whilst the highest point is Mount Logan, Yukon, at 5,959 m (19,550 ft). The westernmost point is Boundary Peak 187 (60°18′22.929″N 141°00′7.128″W) at the southern end of the Yukon–Alaska border, which roughly follows 141°W but leans very slightly east as it goes north. The easternmost point is Cape Spear, Newfoundland (47°31′N 52°37′W). The easternmost point of the Canadian mainland is Elijah Point, Cape St. Charles, Labrador (52°13′N 55°37′W). The Canadian pole of inaccessibility is allegedly near Jackfish River, Alberta (59°2′N 112°49′W). The furthest straight-line distance that can be travelled between Canadian points of land is between the southwest tip of Kluane National Park and Reserve (next to Mount Saint Elias) and Cripple Cove, Newfoundland (near Cape Race).

Climatology
Canada has a diverse climate. The climate varies from temperate on the west coast of British Columbia to a subarctic climate in the north. Extreme northern Canada can have snow for most of the year with a polar climate. Landlocked areas tend to have a warm-summer continental climate, with the exception of Southwestern Ontario, which has a hot-summer humid continental climate. Parts of Western Canada have a semi-arid climate, and parts of Vancouver Island can even be classified as a warm-summer Mediterranean climate. Temperature extremes in Canada range from the record high set in Lytton, British Columbia, on 29 June 2021, to the record low recorded in Snag, Yukon, on 3 February 1947.

Biogeography
Canada is divided into fifteen major terrestrial and five marine ecozones, which are further subdivided into 53 ecoprovinces, 194 ecoregions, and 1,027 ecodistricts. These eco-areas encompass over 80,000 classified species of Canadian wildlife, with an equal number yet to be formally recognized or discovered. Due to pollution, loss of biodiversity, over-exploitation of commercial species, invasive species, and habitat loss, there are currently more than 800 wildlife species at risk of being lost. Canada's major biomes are the tundra, boreal forest, grassland, and temperate deciduous forest. British Columbia contains several smaller biomes, including mountain forest, which extends into Alberta; a small temperate rainforest along the Pacific coast; the semi-arid desert located in the Okanagan; and alpine tundra in the higher mountainous regions. Over half of Canada's landscape is intact and relatively free of human development. Approximately half of Canada is covered by forest. The boreal forest of Canada is considered to be the largest intact forest on Earth, with vast areas undisturbed by roads, cities or industry.
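As an aside on the extreme points listed earlier, great-circle separations between such points can be estimated from their quoted coordinates with the standard haversine formula. The Python sketch below is purely illustrative: it uses the easternmost and westernmost coordinates given above and an assumed mean Earth radius of 6,371 km, and it is not a reconstruction of the Kluane-to-Cripple-Cove distance, whose figure is not given in this text.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Coordinates quoted in the text (west longitudes expressed as negative degrees).
cape_spear = (47 + 31 / 60, -(52 + 37 / 60))   # easternmost point
boundary_peak_187 = (60 + 18 / 60, -141.0)     # westernmost point (rounded)

distance = haversine_km(*cape_spear, *boundary_peak_187)
print(f"Cape Spear to Boundary Peak 187: roughly {distance:,.0f} km")
```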
from Asia. New immigrants settle mostly in major urban areas such as Toronto, Montreal and Vancouver. Canada also accepts large numbers of refugees, accounting for over 10 percent of annual global refugee resettlements.

Population
[Map: population density of Canadian provinces and territories]
The Canada 2021 Census had a total population count of 36,991,981 individuals, making up approximately 0.5% of the world's total population.

Provinces and territories
Sources: Statistics Canada

Cities

Census metropolitan areas

Population growth rates
According to the Organisation for Economic Co-operation and Development (OECD)/World Bank, the population of Canada increased by 5.6 million, or 20.4%, between 1990 and 2008, compared with 21.7% growth in the United States and 31.2% growth in Mexico. According to the OECD/World Bank population statistics, world population growth for the same period was 27%, a total of 1,423 million people. Over the same period, the population of France grew by 8.0%, and from 1991 to 2011 the population of the UK increased by 10.0%.

Total fertility rates in the 19th century
The total fertility rate is the number of children born per woman. Source: Statistics Canada.

Vital statistics
(c) = census results

Current vital statistics

Population projection

Life expectancy at birth from 1831 to 2015
Sources: Our World In Data and the United Nations (1831–1911, 1921–1950); UN World Population Prospects (1950–2015).

Age characteristics

Other demographic statistics
Demographic statistics according to the World Population Review in 2019:
One birth every 1 minute
One death every 2 minutes
One net migrant every 2 minutes
Net gain of one person every 1 minute

Demographic statistics according to the CIA World Factbook, unless otherwise indicated.

Population
35,881,659 (July 2018 est.)
35,623,680 (July 2017 est.)

Age structure
0–14 years: 15.43% (male 2,839,236 / female 2,698,592)
15–24 years: 11.62% (male 2,145,626 / female 2,023,369)
25–54 years: 39.62% (male 7,215,261 / female 7,002,546)
55–64 years: 14.24% (male 2,538,820 / female 2,570,709)
65 years and over: 19.08% (male 3,055,560 / female 3,791,940) (2018 est.)
0–14 years: 15.44% (male 2,819,279 / female 2,680,024)
15–24 years: 11.85% (male 2,171,703 / female 2,048,546)
25–54 years: 39.99% (male 7,227,145 / female 7,020,156)
55–64 years: 14.1% (male 2,492,120 / female 2,529,652)
65 years and over: 18.63% (male 2,958,721 / female 3,676,334) (2017 est.)

Median age
total: 42.4 years; country comparison to the world: 31st
male: 41.1 years
female: 43.7 years (2018 est.)
total: 42.2 years
male: 40.9 years
female: 43.5 years (2017 est.)
total: 40.6 years
male: 39.6 years
female: 41.5 years (2011)

Median age by province and territory, 2011
Newfoundland and Labrador: 44.0
Nova Scotia: 43.7
New Brunswick: 43.7
Prince Edward Island: 42.8
Quebec: 41.9
British Columbia: 41.9
Ontario: 40.4
Yukon: 39.1
Manitoba: 38.4
Saskatchewan: 38.2
Alberta: 36.5
Northwest Territories: 32.3
Nunavut: 24.1
Total: 40.6
Source: Statistics Canada

Birth rate
10.2 births/1,000 population (2018 est.); country comparison to the world: 189th

Death rate
8.8 deaths/1,000 population (2018 est.); country comparison to the world: 67th

Total fertility rate
1.6 children born/woman (2018 est.); country comparison to the world: 180th

Net migration rate
5.7 migrant(s)/1,000 population (2018 est.); country comparison to the world: 20th
5.65 migrant(s)/1,000 population (2013 est.)

Population growth rate
0.72% (2018 est.)
Country comparison to the world: 139th Mother's mean age at first birth 28.1 years (2012 est.) Population distribution The vast majority of Canadians are positioned in a discontinuous band within approximately 300 km of the southern border with the United States; the most populated province is Ontario, followed by Quebec and British Columbia. Life expectancy at birth total population: 82 years male: 79.4 years female: 84.8 years (2018 est.) Dependency ratios total dependency ratio: 47.3 youth dependency ratio: 23.5 elderly dependency ratio: 23.8 potential support ratio: 4.2 (2015 est.) School life expectancy (primary to tertiary education) total: | 0.99 male(s)/female (2013 est.) Infant mortality rate total: 4.5 deaths/1,000 live births. Country comparison to the world: 180th male: 4.8 deaths/1,000 live births female: 4.2 deaths/1,000 live births (2017 est.) Ethnicity Ethnic origin As data is completely self-reported, and reporting individuals may have varying definitions of "Ethnic origin" (or may not know their ethnic origin), these figures should not be considered an exact record of the relative prevalence of different ethno-cultural ancestries but rather how Canadians self-identify. Statistics Canada projects that immigrants will represent between 24.5% and 30.0% of Canada's population in 2036, compared with 20.7% in 2011. Statistics Canada further projects that visible minorities among the working-age population (15 to 64 years) will make up 33.7–34.3% of Canada's total population, compared to 22.3% in 2016. Counting both single and multiple responses, the most commonly identified ethnic origins were (2016): The most common ethnic origins per province are as follows in 2006 (total responses; only percentages 10% or higher shown; ordered by percentage of "Canadian"): Quebec (7,723,525): Canadian (59.1%), French (29.1%) New Brunswick (735,835): Canadian (50.3%), French (27.2%), English (25.9%), Irish (21.6%), Scottish (19.9%) Newfoundland and Labrador (507,265): Canadian (49.0%), English (43.4%), Irish (21.8%) Nova Scotia (906,170): Canadian (39.1%), Scottish (31.2%), English (30.8%), Irish (22.3%), French (17.0%), German (10.8%) Prince Edward Island (137,375): Scottish (39.3%), Canadian (36.8%), English (31.1%), Irish (30.4%), French (21.1%) Ontario (12,651,795): Canadian (23.3%), English (23.1%), Scottish (16.4%), Irish (16.4%), French (10.8%) Alberta (3,567,980): English (24.9%), Canadian (21.8%), German (19.2%), Scottish (18.8%), Irish (15.8%), French (11.1%) Manitoba (1,174,345): English (21.8%), German (18.6%), Canadian (18.5%), Scottish (18.0%), Ukrainian (14.9%), Irish (13.2%), French (12.6%), North American Indian (10.6%) Saskatchewan (1,008,760): German (28.6%), English (24.9%), Scottish (18.9%), Canadian (18.8%), Irish (15.5%), Ukrainian (13.5%), French (12.2%), North American Indian (12.1%) British Columbia (4,324,455): English (27.7%), Scottish (19.3%), Canadian (19.1%), German (13.1%), Chinese (10.7%) Yukon (33,320): English (28.5%), Scottish (25.0%), Irish (22.0%), North American Indian (21.8%), Canadian (21.8%), German (15.6%), French (13.1%) Northwest Territories (40,800): North American Indian (37.0%), Scottish (13.9%), English (13.7%), Canadian (12.8%), Irish (11.9%), Inuit (11.7%) Nunavut (31,700): Inuit (85.4%) Italics indicates either that this response is dominant within this province, or that this province has the highest ratio (percentage) of this response among provinces. 
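The dependency ratios quoted earlier in this section compare the young (0–14) and elderly (65 and over) populations with the working-age (15–64) population, expressed per 100 people of working age, while the potential support ratio is the inverse comparison of workers to seniors. A minimal sketch of that arithmetic using the 2018 age-structure estimates quoted above; because the quoted ratios are 2015 estimates, the computed values differ somewhat:

```python
# 2018 age-structure estimates quoted above (male + female, persons).
pop_0_14 = 2_839_236 + 2_698_592
pop_15_24 = 2_145_626 + 2_023_369
pop_25_54 = 7_215_261 + 7_002_546
pop_55_64 = 2_538_820 + 2_570_709
pop_65_plus = 3_055_560 + 3_791_940

working_age = pop_15_24 + pop_25_54 + pop_55_64  # 15-64 years

# Standard definitions: dependents per 100 people of working age.
youth_dependency = 100 * pop_0_14 / working_age
elderly_dependency = 100 * pop_65_plus / working_age
total_dependency = youth_dependency + elderly_dependency
potential_support = working_age / pop_65_plus  # workers per senior

print(f"youth dependency ratio:   {youth_dependency:.1f}")
print(f"elderly dependency ratio: {elderly_dependency:.1f}")
print(f"total dependency ratio:   {total_dependency:.1f}")
print(f"potential support ratio:  {potential_support:.1f}")
```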
Visible minority population By province and territory By city over 100,000 Aboriginal population Note: Inuit, other Aboriginal and mixed Aboriginal groups are not listed as their own, but they are all accounted for in total Aboriginal By province and territory All statistics are from the Canada 2011 Census. By city over 100,000 Future projections Languages Language used most often at work: English: 78.3% French: 21.7% Non-official languages: 2% Languages by language used most often at home: English: 67.1% French: 21.5% Non-official languages: 11.4% Languages by mother tongue: Religion Statistics Canada (StatCan) grouped responses to the 2011 National Household Survey (NHS) question on religion into nine core religious categories – Buddhist, Christian, Hindu, Jewish, Muslim, Sikh, Traditional (Aboriginal) Spirituality, other religions and no religious affiliation. Among these, of Canadians were self-identified as Christians in 2011. The second, third, and fourth-largest categories were of Canadians with no religious affiliation at , Canadian Muslims at , and Canadian Hindus at . Within the 2011 NHS results, StatCan further subcategorized Christianity in nine groups of its own – Anglican, Baptist, Catholic, Christian Orthodox, Lutheran, Pentecostal, Presbyterian, United Church and Other Christian. Among these, of Canadians were self-identified as Catholic in 2011. The second and third-largest ungrouped subcategories of Christian Canadians were United at and Anglican at , while of Christians were grouped into the Other Christian subcategory comprising numerous denominations. Of the 3,036,785 or of Canadians identified as Other Christians: 105,365 ( of Canadians) were identified as Church of Jesus Christ of Latter-day Saints (LDS Church); 137,775 ( of Canadians) were identified as Jehovah's Witness; 175,880 ( of Canadians) were identified as Mennonite; 550,965 ( of Canadians) were identified as Protestant; and 102,830 ( of Canadians) were identified as Reformed. See also Demographics of North America 1666 census of New France Canada 2016 Census List of Canadian census areas demographic extremes Interprovincial migration in Canada Cahiers québécois de démographie academic journal Canadian Studies in Population academic journal Notes References Further reading Roderic Beaujot and Don Kerr, (2007) The Changing Face of Canada: Essential Readings in Population, Canadian Scholars' Press, . External links Canada Year Book (2010) – Statistics Canada Population estimates and projections, 2010 – |
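The denominational head-counts listed above for the "Other Christian" subcategory can be re-expressed as shares of that subcategory with a few lines of code. A minimal sketch using only the 2011 NHS counts quoted in the text; the final line covers the remaining, unitemized denominations:

```python
# 2011 NHS counts quoted above for the "Other Christian" subcategory.
other_christian_total = 3_036_785
itemized = {
    "Latter-day Saints": 105_365,
    "Jehovah's Witness": 137_775,
    "Mennonite": 175_880,
    "Protestant": 550_965,
    "Reformed": 102_830,
}

for name, count in itemized.items():
    print(f"{name:<22} {count:>9,}  {count / other_christian_total:6.1%}")

# Everything not itemized in the text ("numerous denominations").
remainder = other_christian_total - sum(itemized.values())
print(f"{'other denominations':<22} {remainder:>9,}  "
      f"{remainder / other_christian_total:6.1%}")
```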
consent) and by the failed attempts at constitutional reform. Two provincial referenda, in 1980 and 1995, rejected proposals for sovereignty with majorities of 60% and 50.6% respectively. Given the narrow federalist victory in 1995, a reference was made by the Chrétien government to the Supreme Court of Canada in 1998 regarding the legality of unilateral provincial secession. The court decided that a unilateral declaration of secession would be unconstitutional. This resulted in the passage of the Clarity Act in 2000. The Bloc Québécois, a sovereigntist party which runs candidates exclusively in Quebec, was started by a group of MPs who left the Progressive Conservative (PC) party (along with several disaffected Liberal MPs), and first put forward candidates in the 1993 federal election. With the collapse of the PCs in that election, the Bloc and Liberals were seen as the only two viable parties in Quebec. Thus, prior to the 2006 election, any gain by one party came at the expense of the other, regardless of whether national unity was really at issue. The Bloc, then, benefited (with a significant increase in seat total) from the impressions of corruption that surrounded the Liberal Party in the lead-up to the 2004 election. However, the newly unified Conservative party re-emerged as a viable party in Quebec by winning 10 seats in the 2006 election. In the 2011 election, the New Democratic Party succeeded in winning 59 of Quebec's 75 seats, successfully reducing the number of seats of every other party substantially. The NDP surge nearly destroyed the Bloc, reducing them to 4 seats, far below the minimum requirement of 12 seats for Official party status. Newfoundland and Labrador is also a problem regarding national unity. As the Dominion of Newfoundland was a self-governing country equal to Canada until 1949, there are large, though unco-ordinated, feelings of Newfoundland nationalism and anti-Canadian sentiment among much of the population. This is due in part to the perception of chronic federal mismanagement of the fisheries, forced resettlement away from isolated settlements in the 1960s, the government of Quebec still drawing inaccurate political maps whereby they take parts of Labrador, and to the perception that mainland Canadians look down upon Newfoundlanders. In 2004, the Newfoundland and Labrador First Party contested provincial elections and in 2008 in federal ridings within the province. In 2004, then-premier Danny Williams ordered all federal flags removed from government buildings as a result of lost offshore revenues to equalization clawbacks. On December 23, 2004, premier Williams made this statement to reporters in St. John's, Western alienation is another national-unity-related concept that enters into Canadian politics. Residents of the four western provinces, particularly Alberta, have often been unhappy with a lack of influence and a perceived lack of understanding when residents of Central Canada consider "national" issues. While this is seen to play itself out through many avenues (media, commerce, and so on.), in politics, it has given rise to a number of political parties whose base constituency is in western Canada. These include the United Farmers of Alberta, who first won federal seats in 1917, the Progressives (1921), the Social Credit Party (1935), the Co-operative Commonwealth Federation (1935), the Reconstruction Party (1935), New Democracy (1940) and most recently the Reform Party (1989). 
The Reform Party's slogan "The West Wants In" was echoed by commentators when, after a successful merger with the PCs, the successor party to both parties, the Conservative Party won the 2006 election. Led by Stephen Harper, who is an MP from Alberta, the electoral victory was said to have made "The West IS In" a reality. However, regardless of specific electoral successes or failures, the concept of western alienation continues to be important in Canadian politics, particularly on a provincial level, where opposing the federal government is a common tactic for provincial politicians. For example, in 2001, a group of prominent Albertans produced the Alberta Agenda, urging Alberta to take steps to make full use of its constitutional powers, much as Quebec has done. Political conditions Canada is considered by most sources to be a very stable democracy. In 2006, The Economist ranked Canada the third-most democratic nation in its Democracy Index, ahead of all other nations in the Americas and ahead of every nation more populous than itself. In 2008, Canada was ranked World No. 11 and again ahead of all countries more populous and ahead of other states in the Americas. The Liberal Party of Canada, under the leadership of Paul Martin, won a minority victory in the June 2004 general elections. In December 2003, Martin had succeeded fellow Liberal Jean Chrétien, who had, in 2000, become the first prime minister to lead three consecutive majority governments since 1945. However, in 2004 the Liberals lost seats in Parliament, going from 172 of 301 parliamentary seats to 135 of 308, and from 40.9% to 36.7% in the popular vote. The Canadian Alliance, which did well in western Canada in the 2000 election but was unable to make significant inroads in the East, merged with the Progressive Conservative Party to form the Conservative Party of Canada in late 2003. They proved to be moderately successful in the 2004 campaign, gaining seats from a combined Alliance-PC total of 78 in 2000 to 99 in 2004. However, the new Conservatives lost in popular vote, going from 37.7% in 2000 down to 29.6%. In 2006, the Conservatives, led by Stephen Harper, won a minority government with 124 seats. They improved their percentage from 2004, garnering 36.3% of the vote. During this election, the Conservatives also made major breakthroughs in Quebec. They gained 10 seats here, whereas in 2004 they had no seats. At the 2011 federal election, the Conservatives won a majority government with 167 seats. For the first time, the NDP became the Official Opposition, with 102 seats; the Liberals finished in third place with 34 seats. This was the first election in which the Green Party won a seat, that of leader Elizabeth May; the Bloc won 4 seats, losing official party status. More recently, with the existence of strong third parties and first past the post electorates amongst other factors, Canada on a federal and provincial level has experienced huge swings in seat shares, where third parties (eg NDP, Reform) end up (usually briefly) replacing the Liberals, the Progressive Conservatives or the Conservatives as the main opposition or even the government and leaving them as a rump. Such federally examples include the 1993 federal election with the collapse of the Progressive Conservatives, and the 2011 election leaving the Liberal Party a (temporary) rump along with Bloc Québécois. 
Other examples include the changes of fortune for the Albertan NDP during the province’s 2015 and 2019 elections, and possibly the 2018 Quebec elections with the rise of Coalition Avenir Québec taking government out of Liberal and Parti Québécois. On a provincial level, in the legislatures of Western Provinces the NDP often is the left leaning main party instead of that province’s Liberal Party branch, the latter generally being a rump or smaller than the NDP (excluding British Columbia where the Liberal Party is the main party, right of the NDP). The other main party (right of the NDP) is either the Progressive Conservatives or their successor, or the Saskatchewan Party in Saskatchewan. Realignment: Conservatives in power The Liberal Party, after dominating Canadian politics since the 1920s, was in decline in the early years of the 21st century. As Lang (2010) concluded, they lost their majority in Parliament in the 2004 election, were defeated in 2006, and in 2008 became little more than a "rump", falling to their lowest seat count in decades and a mere 26% of the popular vote. Furthermore, said Lang (a Liberal himself), its prospects "are as bleak as they have ever been." In the 2011 election, the Liberals suffered a major defeat, managing to secure only 18.9% of the vote share and only 34 seats. As a result, the Liberals lost their status as the official opposition to the NDP. In explaining those trends, Behiels (2010) synthesized major studies and reported that "a great many journalists, political advisors, and politicians argue that a new political party paradigm is emerging" She claimed they saw a new power configuration based on a right-wing political party capable of sharply changing the traditional role of the state (federal and provincial) in the twenty-first-century. Behiels said that, unlike Brian Mulroney who tried but failed to challenge the long-term dominance of the Liberals, Harper's attempt had proven to be more determined, systematic and successful. Many commentators thought it signalled a major realignment. The Economist said, "the election represents the biggest realignment of Canadian politics since 1993." Lawrence Martin, commentator for The Globe and Mail said, "Harper has completed a remarkable reconstruction of a Canadian political landscape that endured for more than a century. The realignment saw both old parties of the moderate middle, the Progressive Conservatives and the Liberals, either eliminated or marginalized." Maclean's said, the election marked "an unprecedented realignment of Canadian politics" as "the Conservatives are now in a position to replace the Liberals as the natural governing party in Canada." Despite the grim outlook and poor early poll numbers, when the 2015 election was held, the Liberals under Justin Trudeau had an unprecedented comeback and the realignment was proved only temporary. Gaining 148 seats, they won a majority government for the first time since 2000. The Toronto Star claimed the comeback was "headed straight for the history books" and that Harper's name would "forever be joined with that of his Liberal nemesis in Canada's electoral annals". Spencer McKay for the National Post suggested that "maybe we've witnessed a revival of Canada's 'natural governing party'". Party funding The rules governing the funding of parties are designed to ensure reliance on personal contributions. Personal donations to federal parties and campaigns benefit from tax credits, although the amount of tax relief depends on the amount given. 
Also only people paying income taxes receive any benefit from this. The rules are based on the belief that union or business funding should not be allowed to have as much impact on federal election funding as these are not contributions from citizens and are not evenly spread out between parties. They are still allowed to contribute to the election but only in a minor fashion. The new rules stated that a party had to receive 2% of the vote nationwide in order to receive the general federal funding for parties. Each vote garnered a certain dollar amount for a party (approximately $1.75) in future funding. For the initial disbursement, approximations were made based on previous elections. The NDP received more votes than expected (its national share of the vote went up) while the new Conservative Party of Canada received fewer votes than had been estimated and was asked to refund the difference. Quebec was the first province to implement a similar system of funding many years before the changes to funding of federal parties. Federal funds are disbursed quarterly to parties, beginning at the start of 2005. For the moment, this disbursement delay leaves the NDP and the Green Party in a better position to fight an election, since they rely more on individual contributors than federal funds. The Green Party now receives federal funds, since it for the first time received a sufficient share of the vote in the 2004 election. In 2007, news emerged of a funding loophole that "could cumulatively exceed the legal limit by more than $60,000," through anonymous recurrent donations of $200 to every riding of a party from corporations or unions. At the time, for each individual, the legal annual donation limit was $1,100 for each party, $1,100 combined total for each party's associations, and in an election year, an additional $1,100 combined total for each party's candidates. All three limits increase on 1 April every year based on the inflation rate. Alt URL Two of the biggest federal political parties in Canada experienced a drop in donations in 2020, in light of the COVID-19 pandemic impact on the global economy. Political parties, leaders and status Ordered by number of elected representatives in the House of Commons Liberal Party: Justin Trudeau, Prime Minister of Canada Conservative Party: Candice Bergen, Leader of the Official Opposition Bloc Québécois: Yves-François Blanchet New Democratic Party: Jagmeet Singh Green Party: Elizabeth May, parliamentary leader Leaders’ debates Leaders’ debates in Canada consist of two debates, one English and one French, both produced by a consortium of Canada's five major television broadcasters (CBC/SRC, CTV, Global and TVA) and usually consist of the leaders of all parties with representation in the House of Commons. These debates air on the networks of the producing consortium as well as the public affairs and parliamentary channel CPAC and the American public affairs network C-SPAN. Judiciary The highest court in Canada is the Supreme Court of Canada and is the final court of appeal in the Canadian justice system. The court is composed of nine judges: eight Puisne Justices and the Chief Justice of Canada. Justices of the Supreme Court of Canada are appointed by the Governor-in-Council. The Supreme Court Act limits eligibility for appointment to persons who have been judges of a superior court, or members of the bar for ten or more years. 
Members of the bar or superior judge of Quebec, by law, must hold three of the nine positions on the Supreme Court of Canada. Government departments and structure The Canadian government operates the public service using departments, smaller agencies (for example, commissions, tribunals, and boards), and crown corporations. There are two types of departments: central agencies such as | political party should be formed or how its legal, internal and financial structures should be established. Most parties elect their leaders in instant-runoff elections to ensure that the winner receives more than 50% of the votes. Normally the party leader stands as a candidate to be an MP during an election. Canada's parliamentary system empowers political parties and their party leaders. Where one party gets a majority of the seats in the House of Commons, that party is said to have a "majority government." Through party discipline, the party leader, who is elected in only one riding, exercises a great deal of control over the cabinet and the parliament. Historically the prime minister and senators are selected by the governor general as a representative of the Queen, though in modern practice the monarch's duties are ceremonial. Consequently, the prime minister, while technically selected by the governor general, is for all practical purposes selected by the party with the majority of seats. That is, the party that gets the most seats normally forms the government, with that party's leader becoming prime minister. The prime minister is not directly elected by the general population, although the prime minister is almost always directly elected as an MP within his or her constituency. Again senators while technically selected at the pleasure of the monarch, are ceremonially selected by the governor general at the advice (and for most practical purposes authority) of the prime minister. A minority government situation occurs when the party that holds the most seats in the House of Commons holds fewer seats than the opposition parties combined. In this scenario usually the party leader whose party has the most seats in the House is selected by the governor general to lead the government, however, to create stability, the leader chosen must have the support of the majority of the House, meaning they need the support of at least one other party. Federal-provincial relations As a federation, the existence and powers of the federal government and the ten provinces are guaranteed by the Constitution. The Constitution Act, 1867 sets out the basic constitutional structure of the federal government and the provinces. The powers of the federal Parliament and the provinces can only be changed by constitutional amendments passed by the federal and provincial governments.<ref>Constitution Act, 1982, Part V — Procedure for Amending Constitution of Canada.]</ref> The Crown is the formal head of state of the federal government and each of the ten provinces, but rarely has any political role. The governments are led by the representatives of the people: elected by all Canadians, at the federal level, and by the Canadian citizens of each provinces, at the provincial level. 
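As noted above, most parties choose their leaders by instant-runoff ballot so that the winner ends up with more than half of the votes. A minimal sketch of that counting rule with hypothetical candidates and ranked ballots; nothing here reflects any party's actual leadership procedure:

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly drop the candidate with the fewest first-choice votes
    until someone holds a majority of the continuing ballots."""
    candidates = {name for ballot in ballots for name in ballot}
    while True:
        # Count each ballot for its highest-ranked continuing candidate.
        tally = Counter(
            next(name for name in ballot if name in candidates)
            for ballot in ballots
            if any(name in candidates for name in ballot)
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.remove(min(tally, key=tally.get))  # eliminate last place

# Hypothetical leadership race: each ballot ranks the candidates in order.
ballots = [("A", "B", "C")] * 8 + [("B", "C", "A")] * 7 + [("C", "B", "A")] * 5
print(instant_runoff(ballots))  # "C" is eliminated first; "B" wins on transfers
```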
Federal-provincial (or intergovernmental, formerly Dominion-provincial) relations is a regular issue in Canadian politics: Quebec wishes to preserve and strengthen its distinctive nature, western provinces desire more control over their abundant natural resources, especially energy reserves; industrialized Central Canada is concerned with its manufacturing base, and the Atlantic provinces strive to escape from being less affluent than the rest of the country. In order to ensure that social programs such as health care and education are funded consistently throughout Canada, the "have-not" (poorer) provinces receive a proportionately greater share of federal "transfer (equalization) payments" than the richer, or "have", provinces do; this has been somewhat controversial. The richer provinces often favour freezing transfer payments, or rebalancing the system in their favour, based on the claim that they already pay more in taxes than they receive in federal government services, and the poorer provinces often favour an increase on the basis that the amount of money they receive is not sufficient for their existing needs. Particularly in the past decade, some scholars have argued that the federal government's exercise of its unlimited constitutional spending power has contributed to strained federal-provincial relations. This power allows the federal government to influence provincial policies, by offering funding in areas that the federal government cannot itself regulate. The federal spending power is not expressly set out in the Constitution Act, 1867; however, in the words of the Court of Appeal for Ontario the power "can be inferred" from s. 91(1A), "the public debt and property". A prime example of an exercise of the spending power is the Canada Health Act, which is a conditional grant of money to the provinces. Regulation of health services is, under the Constitution, a provincial responsibility. However, by making the funding available to the provinces under the Canada Health Act contingent upon delivery of services according to federal standards, the federal government has the ability to influence health care delivery. This spending power, coupled with Supreme Court rulings — such as Reference re Canada Assistance Plan (B.C.) — that have held that funding delivered under the spending power can be reduced unilaterally at any time, has contributed to strained federal-provincial relations. Quebec and Canadian politics Except for three short-lived transitional or minority governments, prime ministers from Quebec led Canada continuously from 1968 to early 2006. People from Quebec led both Liberal and Progressive Conservative governments in this period. Monarchs, governors general, and prime ministers are now expected to be at least functional, if not fluent, in both English and French. In selecting leaders, political parties give preference to candidates who are fluently bilingual. By law, three of the nine positions on the Supreme Court of Canada must be held by judges from Quebec. This representation makes sure that at least three judges have sufficient experience with the civil law system to treat cases involving Quebec laws. National unity Canada has a long and storied history of secessionist movements (see Secessionist movements of Canada). National unity has been a major issue in Canada since the forced union of Upper and Lower Canada in 1840. 
The predominant and lingering issue concerning Canadian national unity has been the ongoing conflict between the French-speaking majority in Quebec and the English-speaking majority in the rest of Canada. Quebec's continued demands for recognition of its "distinct society" through special political status has led to attempts for constitutional reform, most notably with the failed attempts to amend the constitution through the Meech Lake Accord and the Charlottetown Accord (the latter of which was rejected through a national referendum). Since the Quiet Revolution, sovereigntist sentiments in Quebec have been variably stoked by the patriation of the Canadian constitution in 1982 (without Quebec's consent) and by the failed attempts at constitutional reform. Two provincial referenda, in 1980 and 1995, rejected proposals for sovereignty with majorities of 60% and 50.6% respectively. Given the narrow federalist victory in 1995, a reference was made by the Chrétien government to the Supreme Court of Canada in 1998 regarding the legality of unilateral provincial secession. The court decided that a unilateral declaration of secession would be unconstitutional. This resulted in the passage of the Clarity Act in 2000. The Bloc Québécois, a sovereigntist party which runs candidates exclusively in Quebec, was started by a group of MPs who left the Progressive Conservative (PC) party (along with several disaffected Liberal MPs), and first put forward candidates in the 1993 federal election. With the collapse of the PCs in that election, the Bloc and Liberals were seen as the only two viable parties in Quebec. Thus, prior to the 2006 election, any gain by one party came at the expense of the other, regardless of whether national unity was really at issue. The Bloc, then, benefited (with a significant increase in seat total) from the impressions of corruption that surrounded the Liberal Party in the lead-up to the 2004 election. However, the newly unified Conservative party re-emerged as a viable party in Quebec by winning 10 seats in the 2006 election. In the 2011 election, the New Democratic Party succeeded in winning 59 of Quebec's 75 seats, successfully reducing the number of seats of every other party substantially. The NDP surge nearly destroyed the Bloc, reducing them to 4 seats, far below the minimum requirement of 12 seats for Official party status. Newfoundland and Labrador is also a problem regarding national unity. As the Dominion of Newfoundland was a self-governing country equal to Canada until 1949, there are large, though unco-ordinated, feelings of Newfoundland nationalism and anti-Canadian sentiment among much of the population. This is due in part to the perception of chronic federal mismanagement of the fisheries, forced resettlement away from isolated settlements in the 1960s, the government of Quebec still drawing inaccurate political maps whereby they take parts of Labrador, and to the perception that mainland Canadians look down upon Newfoundlanders. In 2004, the Newfoundland and Labrador First Party contested provincial elections and in 2008 in federal ridings within the province. In 2004, then-premier Danny Williams ordered all federal flags removed from government buildings as a result of lost offshore revenues to equalization clawbacks. On December 23, 2004, premier Williams made this statement to reporters in St. John's, Western alienation is another national-unity-related concept that enters into Canadian politics. 
Residents of the four western provinces, particularly Alberta, have often been unhappy with a lack of influence and a perceived lack of understanding when residents of Central Canada consider "national" issues. While this is seen to play itself out through many avenues (media, commerce, and so on.), in politics, it has given rise to a number of political parties whose base constituency is in western Canada. These include the United Farmers of Alberta, who first won federal seats in 1917, the Progressives (1921), the Social Credit Party (1935), the Co-operative Commonwealth Federation (1935), the Reconstruction Party (1935), New Democracy (1940) and most recently the Reform Party (1989). The Reform Party's slogan "The West Wants In" was echoed by commentators when, after a successful merger with the PCs, the successor party to both parties, the Conservative Party won the 2006 election. Led by Stephen Harper, who is an MP from Alberta, the electoral victory was said to have made "The West IS In" a reality. However, regardless of specific electoral successes or failures, the concept of western alienation continues to be important in Canadian politics, particularly on a provincial level, where opposing the federal government is a common tactic for provincial politicians. For example, in 2001, a group of prominent Albertans produced the Alberta Agenda, urging Alberta to take steps to make full use of its constitutional powers, much as Quebec has done. Political conditions Canada is considered by most sources to be a very stable democracy. In 2006, The Economist ranked Canada the third-most democratic nation in its Democracy Index, ahead of all other nations in the Americas and ahead of every nation more populous than itself. In 2008, Canada was ranked World No. 11 and again ahead of all countries more populous and ahead of other states in the Americas. The Liberal Party of Canada, under the leadership of Paul Martin, won a minority victory in the June 2004 general elections. In December 2003, Martin had succeeded fellow Liberal Jean Chrétien, who had, in 2000, become the first prime minister to lead three consecutive majority governments since 1945. However, in 2004 the Liberals lost seats in Parliament, going from 172 of 301 parliamentary seats to 135 of 308, and from 40.9% to 36.7% in the popular vote. The Canadian Alliance, which did well in western Canada in the 2000 election but was unable to make significant inroads in the East, merged with the Progressive Conservative Party to form the Conservative Party of Canada in late 2003. They proved to be moderately successful in the 2004 campaign, gaining seats from a combined Alliance-PC total of 78 in 2000 to 99 in 2004. However, the new Conservatives lost in popular vote, going from 37.7% in 2000 down to 29.6%. In 2006, the Conservatives, led by Stephen Harper, won a minority government with 124 seats. They improved their percentage from 2004, garnering 36.3% of the vote. During this election, the Conservatives also made major breakthroughs in Quebec. They gained 10 seats here, whereas in 2004 they had no seats. At the 2011 federal election, the Conservatives won a majority government with 167 seats. For the first time, the NDP became the Official Opposition, with 102 seats; the Liberals finished in third place with 34 seats. This was the first election in which the Green Party won a seat, that of leader Elizabeth May; the Bloc won 4 seats, losing official party status. 
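First-past-the-post translates votes into seats unevenly, which underlies the large seat swings discussed in the next paragraph; comparing seat shares with vote shares makes the gap visible. A minimal sketch using the 2011 seat totals quoted above and, for the vote comparison, only the Liberals' 18.9% share cited later in this section (the House then had 308 seats):

```python
# 2011 federal election seat totals quoted above.
seats = {"Conservative": 167, "NDP": 102, "Liberal": 34, "Bloc": 4, "Green": 1}
total_seats = sum(seats.values())  # 308

for party, won in seats.items():
    print(f"{party:<13} {won:>3} seats = {won / total_seats:5.1%} of the House")

# Votes-to-seats gap for the Liberals: 18.9% of the vote (quoted in the text)
# against their share of seats.
liberal_vote_share = 0.189
liberal_seat_share = seats["Liberal"] / total_seats
print(f"Liberal gap: {liberal_vote_share:.1%} of votes vs "
      f"{liberal_seat_share:.1%} of seats")
```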
More recently, with the existence of strong third parties and first past the post electorates amongst other factors, Canada on a federal and provincial level has experienced huge swings in seat shares, where third parties (eg NDP, Reform) end up (usually briefly) replacing the Liberals, the Progressive Conservatives or the Conservatives as the main opposition or even the government and leaving them as a rump. Such federally examples include the 1993 federal election with the collapse of the Progressive Conservatives, and the 2011 election leaving the Liberal Party a (temporary) rump along with Bloc Québécois. Other examples include the changes of fortune for the Albertan NDP during the province’s 2015 and 2019 elections, and possibly the 2018 Quebec elections with the rise of Coalition Avenir Québec taking government out of Liberal and Parti Québécois. On a provincial level, in the legislatures of Western Provinces the NDP often is the left leaning main party instead of that province’s Liberal Party branch, the latter generally being a rump or smaller than the NDP (excluding British Columbia where the Liberal Party is the main party, right of the NDP). The other main party (right of the NDP) is either the Progressive Conservatives or their successor, or the Saskatchewan Party in Saskatchewan. Realignment: Conservatives in power The Liberal Party, after dominating Canadian politics since the 1920s, was in decline in the early years of the 21st century. As Lang (2010) concluded, they lost their majority in Parliament in the 2004 election, were defeated in 2006, and in 2008 became little more than a "rump", falling to their lowest seat count in decades and a mere 26% of the popular vote. Furthermore, said Lang (a Liberal himself), its prospects "are as bleak as they have ever been." In the 2011 election, the Liberals suffered a major defeat, managing to secure only 18.9% of the vote share and only 34 seats. As a result, the Liberals lost their status as the official opposition to the NDP. In explaining those trends, Behiels (2010) synthesized major studies and reported that "a great many journalists, political advisors, and politicians argue that a new political party paradigm is emerging" She claimed they saw a new power configuration based on a right-wing political party capable of sharply changing the traditional role of the state (federal and provincial) in the twenty-first-century. Behiels said that, unlike Brian Mulroney who tried but failed to challenge the long-term dominance of the Liberals, Harper's attempt had proven to be more determined, systematic and successful. Many commentators thought it signalled a major realignment. The Economist said, "the election represents the biggest realignment of Canadian politics since 1993." Lawrence Martin, commentator for The Globe and Mail said, "Harper has completed a remarkable reconstruction of a Canadian political landscape that endured for more than a century. The realignment saw both old parties of the moderate middle, the Progressive Conservatives and the Liberals, either eliminated or marginalized." Maclean's said, the election marked "an unprecedented realignment of Canadian politics" as "the Conservatives are now in a position to replace the Liberals as the natural governing party in Canada." Despite the grim outlook and poor early poll numbers, when the 2015 election was held, the Liberals under Justin Trudeau had an unprecedented comeback and the realignment was proved only temporary. 
Gaining 148 seats, they won a majority government for the first time since 2000. The Toronto Star claimed the comeback was "headed straight for the history books" and that Harper's name would "forever be joined with that of his Liberal nemesis in Canada's electoral annals". Spencer McKay for the National Post suggested that "maybe we've witnessed a revival of Canada's 'natural governing party'". Party funding The rules governing the funding of parties are designed to ensure reliance on personal contributions. Personal donations to federal parties and campaigns benefit from tax credits, although the |
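The per-vote allowance described in the party-funding discussion above amounts to a simple threshold-plus-rate rule: a party receiving at least 2% of the national vote earned roughly $1.75 per vote per year, paid in quarterly instalments. A minimal sketch of that rule; the rate and threshold are the approximate figures quoted earlier, and the vote counts are purely illustrative:

```python
PER_VOTE_RATE = 1.75   # approximate annual allowance per vote quoted above
THRESHOLD = 0.02       # 2% of the national vote required to qualify

def annual_allowance(party_votes: int, national_votes: int) -> float:
    """Annual public allowance under the per-vote subsidy rule
    (zero if the party falls below the national threshold)."""
    if party_votes / national_votes < THRESHOLD:
        return 0.0
    return party_votes * PER_VOTE_RATE

# Hypothetical example: 14.7 million valid votes cast nationally.
national = 14_700_000
for votes in (5_800_000, 1_000_000, 250_000):
    yearly = annual_allowance(votes, national)
    print(f"{votes:>9,} votes -> ${yearly:>12,.2f}/year "
          f"(${yearly / 4:,.2f} per quarterly instalment)")
```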
Many, if not most, towns in northern Canada, where agriculture is difficult, exist because of a nearby mine or source of timber. Canada is a world leader in the production of many natural resources such as gold, nickel, uranium, diamonds, lead, and in recent years, crude petroleum, which, with the world's second-largest oil reserves, is taking an increasingly prominent position in natural resources extraction. Several of Canada's largest companies are based in natural resource industries, such as Encana, Cameco, Goldcorp, and Barrick Gold. The vast majority of these products are exported, mainly to the United States. There are also many secondary and service industries that are directly linked to primary ones. For instance one of Canada's largest manufacturing industries is the pulp and paper sector, which is directly linked to the logging business. The reliance on natural resources has several effects on the Canadian economy and Canadian society. While manufacturing and service industries are easy to standardize, natural resources vary greatly by region. This ensures that differing economic structures developed in each region of Canada, contributing to Canada's strong regionalism. At the same time the vast majority of these resources are exported, integrating Canada closely into the international economy. Howlett and Ramesh argue that the inherent instability of such industries also contributes to greater government intervention in the economy, to reduce the social impact of market changes. Natural resource industries also raise important questions of sustainability. Despite many decades as a leading producer, there is little risk of depletion. Large discoveries continue to be made, such as the massive nickel find at Voisey's Bay. Moreover, the far north remains largely undeveloped as producers await higher prices or new technologies as many operations in this region are not yet cost effective. In recent decades Canadians have become less willing to accept the environmental destruction associated with exploiting natural resources. High wages and Aboriginal land claims have also curbed expansion. Instead, many Canadian companies have focused their exploration, exploitation and expansion activities overseas where prices are lower and governments more amenable. Canadian companies are increasingly playing important roles in Latin America, Southeast Asia, and Africa. The depletion of renewable resources has raised concerns in recent years. After decades of escalating overutilization the cod fishery all but collapsed in the 1990s, and the Pacific salmon industry also suffered greatly. The logging industry, after many years of activism, has in recent years moved to a more sustainable model, or to other countries. Data The following table shows the main economic indicators in 1980–2020 (with IMF staff estimates for 2021–2026). Inflation below 3% is in green. Unemployment rate Export trade Export trade from Canada measured in US dollars. In 2020, Canada exported over US$390 billion. Import trade Import trade in 2017 measured in US dollars. Measuring productivity Productivity measures are key indicators of economic performance and a key source of economic growth and competitiveness. The Organisation for Economic Co-operation and Development (OECD)'s Compendium of Productivity Indicators, published annually, presents a broad overview of productivity levels and growth in member nations, highlighting key measurement issues. 
It analyses the role of "productivity as the main driver of economic growth and convergence" and the "contributions of labour, capital and MFP in driving economic growth". According to the definition above, "MFP is often interpreted as the contribution to economic growth made by factors such as technical and organisational innovation" (OECD 2008, 11). Measures of productivity include Gross Domestic Product (GDP) (OECD 2008, 11) and multifactor productivity. Multifactor productivity Another productivity measure, used by the OECD, is the long-term trend in multifactor productivity (MFP), also known as total factor productivity (TFP). This indicator assesses an economy's "underlying productive capacity ('potential output'), itself an important measure of the growth possibilities of economies and of inflationary pressures". MFP measures the residual growth that cannot be explained by the rate of change in the services of labour, capital and intermediate outputs, and is often interpreted as the contribution to economic growth made by factors such as technical and organisational innovation (OECD 2008, 11). According to the OECD's annual economic survey of Canada in June 2012, Canada has experienced weak MFP growth, which has declined further since 2002. One way to raise MFP growth is to boost innovation, and Canada's innovation indicators, such as business R&D and patenting rates, were poor. Raising MFP growth is "needed to sustain rising living standards, especially as the population ages". Bank of Canada The mandate of the central bank, the Bank of Canada, is to conduct monetary policy that "preserves the value of money by keeping inflation low and stable". Monetary Policy Report The Bank of Canada announces its policy (bank) rate eight times a year; four of these announcements accompany the release of its Monetary Policy Report. The Bank of Canada, a federal Crown corporation, is responsible for Canada's monetary system. Under the inflation-targeting monetary policy that has been the cornerstone of Canada's monetary and fiscal policy since the early 1990s, the Bank of Canada sets an inflation target. The target was set at 2 per cent, the midpoint of a control range of 1 to 3 per cent. The Bank established a set of inflation-reduction targets to keep inflation "low, stable and predictable", to foster "confidence in the value of money", and to contribute to Canada's sustained growth, employment gains and improved standard of living. In a January 9, 2019 statement on the release of the Monetary Policy Report, Bank of Canada Governor Stephen S. Poloz summarized major events since the October report, such as the "negative economic consequences" of the US-led trade war with China. In response to the ongoing trade war, "bond yields have fallen, yield curves have flattened even more and stock markets have repriced significantly" in "global financial markets". In Canada, low oil prices would weigh on the "macroeconomic outlook", and the housing sector was not stabilizing as quickly as anticipated. Inflation targeting During the period that John Crow was Governor of the Bank of Canada (1987 to 1994), there was a worldwide recession, the bank rate rose to around 14%, and unemployment topped 11%.
Although since that time inflation-targeting has been adopted by "most advanced-world central banks", in 1991 it was innovative and Canada was an early adopter when the then-Finance Minister Michael Wilson approved the Bank of Canada's first inflation-targeting in the 1991 federal budget. The inflation target was set at 2 per cent. Inflation is measured by the total consumer price index (CPI). In 2011 the Government of Canada and the Bank of Canada extended Canada's inflation-control target to December 31, 2016. The Bank of Canada uses three unconventional instruments to achieve the inflation target: "a conditional statement on the future path of the policy rate", quantitative easing, and credit easing. As a result, interest rates and inflation eventually came down along with the value of the Canadian dollar. From 1991 to 2011 the inflation-targeting regime kept "price gains fairly reliable". Following the Financial crisis of 2007–08 the narrow focus of inflation-targeting as a means of providing stable growth in the Canadian economy was questioned. By 2011, the then-Bank of Canada Governor Mark Carney argued that the central bank's mandate would allow for a more flexible inflation-targeting in specific situations where he would consider taking longer "than the typical six to eight quarters to return inflation to 2 per cent". On July 15, 2015, the Bank of Canada announced that it was lowering its target for the overnight rate by another one-quarter percentage point, to 0.5 per cent "to try to stimulate an economy that appears to have failed to rebound meaningfully from the oil shock woes that dragged it into decline in the first quarter". According to the Bank of Canada announcement, in the first quarter of 2015, the total Consumer price index (CPI) inflation was about 1 per cent. This reflects "year-over-year price declines for consumer energy products". Core inflation in the first quarter of 2015 was about 2 per cent with an underlying trend in inflation at about 1.5 to 1.7 per cent. In response to the Bank of Canada's July 15, 2015 rate adjustment, Prime Minister Stephen Harper explained that the economy was "being dragged down by forces beyond Canadian borders such as global oil prices, the European debt crisis, and China's economic slowdown" which has made the global economy "fragile". The Chinese stock market had lost about US$3 trillion of wealth by July 2015 when panicked investors sold stocks, | target: "a conditional statement on the future path of the policy rate", quantitative easing, and credit easing. As a result, interest rates and inflation eventually came down along with the value of the Canadian dollar. From 1991 to 2011 the inflation-targeting regime kept "price gains fairly reliable". Following the Financial crisis of 2007–08 the narrow focus of inflation-targeting as a means of providing stable growth in the Canadian economy was questioned. By 2011, the then-Bank of Canada Governor Mark Carney argued that the central bank's mandate would allow for a more flexible inflation-targeting in specific situations where he would consider taking longer "than the typical six to eight quarters to return inflation to 2 per cent". On July 15, 2015, the Bank of Canada announced that it was lowering its target for the overnight rate by another one-quarter percentage point, to 0.5 per cent "to try to stimulate an economy that appears to have failed to rebound meaningfully from the oil shock woes that dragged it into decline in the first quarter". 
According to the Bank of Canada announcement, in the first quarter of 2015, the total Consumer price index (CPI) inflation was about 1 per cent. This reflects "year-over-year price declines for consumer energy products". Core inflation in the first quarter of 2015 was about 2 per cent with an underlying trend in inflation at about 1.5 to 1.7 per cent. In response to the Bank of Canada's July 15, 2015 rate adjustment, Prime Minister Stephen Harper explained that the economy was "being dragged down by forces beyond Canadian borders such as global oil prices, the European debt crisis, and China's economic slowdown" which has made the global economy "fragile". The Chinese stock market had lost about US$3 trillion of wealth by July 2015 when panicked investors sold stocks, which created declines in the commodities markets, which in turn negatively impacted resource-producing countries like Canada. The Bank's main priority has been to keep inflation at a moderate level. As part of that strategy, interest rates were kept at a low level for almost seven years. Since September 2010, the key interest rate (overnight rate) was 0.5%. In mid 2017, inflation remained below the Bank's 2% target, (at 1.6%) mostly because of reductions in the cost of energy, food and automobiles; as well, the economy was in a continuing spurt with a predicted GDP growth of 2.8 percent by year end. Early on July 12, 2017, the bank issued a statement that the benchmark rate would be increased to 0.75%. Key industries In 2017, the Canadian economy had the following relative weighting by industry, as percentage value of GDP: Service sector The service sector in Canada is vast and multifaceted, employing about three quarters of Canadians and accounting for 70% of GDP. The largest employer is the retail sector, employing almost 12% of Canadians. The retail industry is concentrated mainly in a small number of chain stores clustered together in shopping malls. In recent years, there has been an increase in the number of big-box stores, such as Wal-Mart (of the United States), Real Canadian Superstore, and Best Buy (of the United States). This has led to fewer workers in this sector and a migration of retail jobs to the suburbs. The second largest portion of the service sector is the business service and it employs only a slightly smaller percentage of the population. This includes the financial services, real estate, and communications industries. This portion of the economy has been rapidly growing in recent years. It is largely concentrated in the major urban centres, especially Toronto, Montreal and Vancouver (see Banking in Canada). The education and health sectors are two of Canada's largest, but both are largely under the influence of the government. The health care industry has been quickly growing, and is the third largest in Canada. Its rapid growth has led to problems for governments who must find money to fund it. Canada has an important high tech industry, and a burgeoning film, television, and entertainment industry creating content for local and international consumption (see Media in Canada). Tourism is of ever increasing importance, with the vast majority of international visitors coming from the United States. Casino gaming is currently the fastest-growing component of the Canadian tourism industry, contributing $5 billion in profits for Canadian governments and employing 41,000 Canadians as of 2001. 
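The inflation-targeting framework described above reduces to a band check: twelve-month CPI inflation is compared with the 1 to 3 per cent control range and its 2 per cent midpoint, and the policy (overnight) rate is adjusted in quarter-point steps such as the July 2015 cut to 0.5 per cent. A minimal sketch of that arithmetic; the CPI index levels are invented for illustration and are not Statistics Canada data:

```python
TARGET_MIDPOINT = 2.0   # per cent
TARGET_BAND = (1.0, 3.0)

def yoy_inflation(cpi_now: float, cpi_year_ago: float) -> float:
    """Twelve-month CPI inflation, in per cent."""
    return 100 * (cpi_now / cpi_year_ago - 1)

# Illustrative index levels only.
cpi_year_ago, cpi_now = 125.0, 126.3
inflation = yoy_inflation(cpi_now, cpi_year_ago)

low, high = TARGET_BAND
status = "inside" if low <= inflation <= high else "outside"
print(f"CPI inflation: {inflation:.1f}% ({status} the {low}-{high}% control "
      f"range, {inflation - TARGET_MIDPOINT:+.1f} pp from the 2% midpoint)")

# Policy-rate moves come in 25-basis-point steps, e.g. the quoted 2015 cut:
overnight_rate = 0.75 - 0.25   # per cent
print(f"overnight rate after a quarter-point cut: {overnight_rate:.2f}%")
```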
Manufacturing The general pattern of development for wealthy nations was a transition from a raw material production based economy to a manufacturing based one, and then to a service based economy. At its World War II peak in 1944, Canada's manufacturing sector accounted for 29% of GDP, declining to 10.37% in 2017. Canada has not suffered as greatly as most other rich, industrialized nations from the pains of the relative decline in the importance of manufacturing since the 1960s. A 2009 study by Statistics Canada also found that, while manufacturing declined as a relative percentage of GDP from 24.3% in the 1960s to 15.6% in 2005, manufacturing volumes between 1961 and 2005 kept pace with the overall growth in the volume index of GDP. Manufacturing in Canada was especially hit hard by the financial crisis of 2007–08. As of 2017, manufacturing accounts for 10% of Canada's GDP, a relative decline of more than 5% of GDP since 2005. Central Canada is home to branch plants to all the major American and Japanese automobile makers and many parts factories owned by Canadian firms such as Magna International and Linamar Corporation. Steel Canada was the world's nineteenth-largest steel exporter in 2018. In year-to-date 2019 (through March), further referred to as YTD 2019, Canada exported 1.39 million metric tons of steel, a 22 percent decrease from 1.79 million metric tons in YTD 2018. Canada's exports represented about 1.5 percent of all steel exported globally in 2017, based on available data. By volume, Canada's 2018 steel exports represented just over one-tenth the volume of the world's largest exporter, China. In value terms, steel represented 1.4 percent of the total goods Canada exported in 2018. The growth in exports in the decade since 2009 has been 29%. The largest producers in 2018 were ArcelorMittal, Essar Steel Algoma, and the first of those alone accounted for roughly half of Canadian steel production through its two subsidiaries. The top two markets for Canada's exports were its NAFTA partners, and by themselves accounted for 92 percent of exports by volume. Canada sent 83 percent of its steel exports to the United States in YTD 2019. The gap between domestic demand and domestic production increased to -2.4 million metric tons, up from -0.2 million metric tons in YTD 2018. In YTD 2019, exports as a share of production decreased to 41.6 percent from 53 percent in YTD 2018. In 2017, heavy industry accounted for 10.2% of Canada's Greenhouse gas emissions. Mining In 2019, the country was the 4th largest world producer of platinum; the world's 5th largest producer of gold; the world's 5th largest producer of nickel; the world's 10th largest producer of copper; the 8th largest world producer of iron ore; the 4th largest world producer of titanium; the world's largest producer of potash; the 2nd largest world producer of niobium; the 4th largest world producer of sulfur; the world's 7th largest producer of molybdenum; the 7th worldwide producer of cobalt; the 8th largest world producer of lithium; the 8th largest world producer of zinc; the 13th largest world producer of gypsum; the 14th worldwide producer of antimony; the world's 10th largest producer of graphite; in addition to being the 6th largest world producer of salt. It was the 2nd largest producer in the world of uranium in 2018. Energy Canada has access to cheap sources of energy because of its geography. 
This has enabled the creation of several important industries, such as the large aluminum industries in British Columbia and Quebec. Canada is also one of the world's highest per capita consumers of energy. Electricity The electricity sector in Canada has played a significant role in the economic and political life of the country since the late 19th century. The sector is organized along provincial and territorial lines. In a majority of provinces, large government-owned integrated public utilities play a leading role in the generation, transmission and distribution of electricity. Ontario and Alberta have created electricity markets in the last decade in order to increase investment and competition in this sector of the economy. In 2017, the electricity sector accounted for 10% of total national greenhouse gas emissions. Canada has substantial electricity trade with the neighbouring United States amounting to 72 TWh exports and 10 TWh imports in 2017. Hydroelectricity accounted for 59% of all electric generation in Canada in 2016, making Canada the world's second-largest producer of hydroelectricity after China. Since 1960, large hydroelectric projects, especially in Quebec, British Columbia, Manitoba and Newfoundland and Labrador, have significantly increased the country's generation capacity. The second-largest single source of power (15% of the total) is nuclear power, with several plants in Ontario generating more than half of that province's electricity, and one generator in New Brunswick. This makes Canada the world's sixth-largest producer of electricity generated by nuclear power, producing 95 TWh in 2017. Fossil fuels provide 19% of Canadian electric power, about half as coal (9% of the total) and the remainder a mix of natural gas and oil. Only five provinces use coal for electricity generation. Alberta, Saskatchewan, and Nova Scotia rely on coal for nearly half their generation while other provinces and territories use little or none. Alberta and Saskatchewan also use a substantial amount of natural gas. Remote communities including all of Nunavut and much of the Northwest Territories produce most of their electricity from diesel generators, at high economic and environmental cost. The federal government has set up initiatives to reduce dependence on diesel-fired electricity. Non-hydro renewables are a fast-growing portion of the total, at 7% in 2016. Oil and Gas Canada possesses large oil and gas resources centred in Alberta and the Northern Territories, but also present in neighbouring British Columbia and Saskatchewan. The vast Athabasca oil sands give Canada the world's third largest reserves of oil after Saudi Arabia and Venezuela according to USGS. As such, the oil and gas industry represents 27% of Canada's total greenhouse gas emissions, an increase of 84% since 1990, mostly due to the development of the oil sands. Historically, an important issue in Canadian politics is the interplay between the oil and energy industry in Western Canada and the industrial heartland of Southern Ontario. Foreign investment in Western oil projects has fueled Canada's rising dollar. This has raised the price of Ontario's manufacturing exports and made them less competitive, a problem similar to the decline of the manufacturing sector in the Netherlands. The National Energy Policy of the early 1980s attempted to make Canada oil-sufficient and to ensure equal supply and price of oil in all parts of Canada, especially for the eastern manufacturing base. 
Oil and Gas Canada possesses large oil and gas resources centred in Alberta and the northern territories, but also present in neighbouring British Columbia and Saskatchewan. The vast Athabasca oil sands give Canada the world's third-largest reserves of oil, after Saudi Arabia and Venezuela, according to the USGS. As such, the oil and gas industry represents 27% of Canada's total greenhouse gas emissions, an increase of 84% since 1990, mostly due to the development of the oil sands. Historically, an important issue in Canadian politics has been the interplay between the oil and energy industry in Western Canada and the industrial heartland of Southern Ontario. Foreign investment in Western oil projects has fueled Canada's rising dollar. This has raised the price of Ontario's manufacturing exports and made them less competitive, a problem similar to the decline of the manufacturing sector in the Netherlands. The National Energy Program of the early 1980s attempted to make Canada self-sufficient in oil and to ensure equal supply and price of oil in all parts of Canada, especially for the eastern manufacturing base. This policy proved deeply divisive, as it forced Alberta to sell low-priced oil to eastern Canada. The policy was eliminated amid the collapse of oil prices in 1985, five years after it was first announced. The new Prime Minister, Brian Mulroney, had campaigned against the policy in the 1984 Canadian federal election. One of the most controversial sections of the Canada–United States Free Trade Agreement of 1988 was a promise that Canada would never charge the United States more for energy than fellow Canadians. Agriculture Canada is also one of the world's largest suppliers of agricultural products, particularly of wheat and other grains, and is a major exporter to the United States and Asia. As with all other developed nations, the proportion of the population and GDP devoted to agriculture fell dramatically over the 20th century. The agriculture and agri-food manufacturing sector contributed $49.0 billion to Canada's GDP in 2015, accounting for 2.6% of total GDP. This sector also accounts for 8.4% of Canada's greenhouse gas emissions. As with other developed nations, the Canadian agriculture industry receives significant government subsidies and supports. However, Canada has been a strong supporter of reducing market-influencing subsidies through the World Trade Organization. In 2000, Canada spent approximately CDN$4.6 billion on supports for the industry. Of this, $2.32 billion was classified under the WTO designation of "green box" support, meaning it did not directly influence the market, such as money for research or disaster relief. All but $848.2 million were subsidies worth less than 5% of the value of the crops they were provided for.
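The 2015 agri-food figures above also imply the size of the overall economy against which they are measured; the minimal Python sketch below uses only the rounded $49.0 billion and 2.6% figures quoted here, so the result is approximate.

```python
# The sector's contribution and its share of GDP together imply total GDP.
sector_gdp = 49.0     # billions of CAD, agriculture and agri-food manufacturing, 2015
sector_share = 0.026  # 2.6% of total GDP

implied_total_gdp = sector_gdp / sector_share
print(f"Implied total GDP in 2015: about ${implied_total_gdp:,.0f} billion CAD")
# roughly $1,885 billion, i.e. on the order of $1.9 trillion
```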
Free-trade agreements Free-trade agreements in force Source: Canada–Israel Free Trade Agreement (entered into force January 1, 1997, modernization ongoing) Canada–Chile Free Trade Agreement (entered into force July 5, 1997) Canada–Costa Rica Free Trade Agreement (entered into force November 1, 2002, modernization ongoing) Canada–European Free Trade Association Free Trade Agreement (Iceland, Norway, Switzerland and Liechtenstein; entered into force July 1, 2009) Canada–Peru Free Trade Agreement (entered into force August 1, 2009) Canada–Colombia Free Trade Agreement (signed November 21, 2008, entered into force August 15, 2011; Canada's ratification of this FTA had been dependent upon Colombia's ratification of the "Agreement Concerning Annual Reports on Human Rights and Free Trade Between Canada and the Republic of Colombia" signed on May 27, 2010) Canada–Jordan Free Trade Agreement (signed June 28, 2009, entered into force October 1, 2012) Canada–Panama Free Trade Agreement (signed May 14, 2010, entered into force April 1, 2013) Canada–South Korea Free Trade Agreement (signed March 11, 2014, entered into force January 1, 2015) Canada–Ukraine Free Trade Agreement (signed July 11, 2016, entered into force August 1, 2017) Comprehensive and Progressive Agreement for Trans-Pacific Partnership (signed March 8, 2018, entered into force December 30, 2018) Canada–United States–Mexico Agreement (signed November 30, 2018, entered into force July 1, 2020) Free-trade agreements no longer in force Source: Canada–U.S. Free Trade Agreement (signed October 12, 1987, entered into force January 1, 1989, later superseded by NAFTA) Trans-Pacific Partnership (concluded October 5, 2015, superseded by CPTPP) North American Free Trade Agreement (entered into force January 1, 1994, later superseded by CUSMA) Comprehensive Economic and Trade Agreement (concluded August 5, 2014) Ongoing free-trade agreement negotiations Source: Canada is negotiating bilateral FTAs with the following countries and trade blocs: Caribbean Community (CARICOM) Guatemala, Nicaragua and El Salvador Dominican Republic India Japan Morocco Singapore Andean Community (FTAs are already in force with Peru and Colombia) Canada has been involved in negotiations to create the following regional trade blocs: Canada and Central American Free Trade Agreement Free Trade Area of the Americas (FTAA) Political issues Relations with the U.S. Canada and the United States share an extensive trading relationship. Canada's job market continues to perform well alongside that of the US, reaching a 30-year low in the unemployment rate in December 2006, following 14 consecutive years of employment growth. The United States is by far Canada's largest trading partner, with more than $1.7 billion CAD in trade per day in 2005. In 2009, 73% of Canada's exports went to the United States, and 63% of Canada's imports were from the United States. Trade with Canada makes up 23% of the United States' exports and 17% of its imports. By comparison, in 2005 this was more than U.S. trade with all countries in the European Union combined, and well over twice U.S. trade with all the countries of Latin America combined. Just the two-way trade that crosses the Ambassador Bridge between Michigan and Ontario equals all U.S. exports to Japan.
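For a rough sense of the scale of the daily trade figure quoted above, the sketch below (an illustration only, annualizing the quoted $1.7 billion CAD per day) converts it to a yearly total.

```python
# Rough annualization of the 2005 daily trade figure quoted above.
daily_trade_cad = 1.7  # billions of CAD per day (2005)
annual_trade_cad = daily_trade_cad * 365
print(f"Approximate two-way trade in 2005: ${annual_trade_cad:,.0f} billion CAD per year")
# on the order of $620 billion CAD
```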
In 1882, Canadian Pacific transmitted its first commercial telegram over telegraph lines it had erected alongside its tracks, breaking Western Union's monopoly. Great North Western Telegraph, facing bankruptcy, was taken over in 1915 by Canadian Northern. By the end of World War II, Canadians communicated by telephone more than any other country. In 1967 the CP and CN networks were merged to form CNCP Telecommunications. As of 1951, approximately 7,000 messages were sent daily from the United States to Canada. An agreement with Western Union required the U.S. company to route messages in a specified ratio of 3:1, with three telegraphic messages transmitted to Canadian National for every message transmitted to Canadian Pacific. The agreement was complicated by the fact that some Canadian destinations were served by only one of the two networks. Fixed-line telephony Telephones - fixed lines: total subscriptions: 13.926 million (2020); subscriptions per 100 inhabitants: 36.9 (2020 est.) Telephones - mobile cellular: 36,093,021 (2020); subscriptions per 100 inhabitants: 95.63 (2020 est.) Telephone system: (2019) Domestic: Nearly 37 per 100 fixed-line and 96 per 100 mobile-cellular teledensity; domestic satellite system with about 300 earth stations (2020) International: country code +1; submarine cables provide links within the Americas and to Europe; satellite earth stations - 7 (5 Intelsat: 4 trans-Atlantic Ocean and 1 trans-Pacific Ocean; and 2 Intersputnik: Atlantic Ocean region) Call signs ITU prefixes: Letter combinations available for use in Canada as the first two letters of a television or radio station's call sign are CF, CG, CH, CI, CJ, CK, CY, CZ, VA, VB, VC, VD, VE, VF, VG, VO, VX, VY, XJ, XK, XL, XM, XN and XO. Only CF, CH, CI, CJ and CK are currently in common use, although four radio stations in St. John's, Newfoundland and Labrador retained call letters beginning with VO when Newfoundland joined Canadian Confederation in 1949. Stations owned by the Canadian Broadcasting Corporation use CB through a special agreement with the government of Chile. Some codes beginning with VE and VF are also in use to identify radio repeater transmitters. Radio As of 2016, there were over 1,100 radio stations and audio services broadcasting in Canada. Of these, 711 are private commercial radio stations. These commercial stations account for over three quarters of radio stations in Canada. The remainder of the radio stations are a mix of public broadcasters, such as CBC Radio, as well as campus, community, and Aboriginal stations. Television As of 2018, 762 TV services were broadcasting in Canada. This includes both conventional television stations and discretionary services. Cable and satellite television services are available throughout Canada. The largest cable providers are Bell Canada, Rogers Cable, Shaw Cable, Vidéotron, Telus and Cogeco, while the two licensed satellite providers are Bell Satellite TV and Shaw Direct. Internet Bell, Rogers, Telus, and Shaw are among the bigger ISPs
in Canada. Depending on location, Bell and Rogers are the big internet service providers in the eastern provinces, while Shaw and Telus are the main players competing in the western provinces. Internet service providers: there are more than 44 ISPs in Canada, including Beanfield, Bell Canada, Cable Axion, Cablevision (Canada), Chebucto Community Net, Cogeco, Colbanet, Craig Wireless, Dery Telecom, Eastlink, Electronic Box, Everus Communications, Guest-tek, Information Gateway Services, and others.
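The fixed-line and mobile subscription counts and the per-100-inhabitant rates quoted in the telephony statistics above are mutually consistent; the minimal Python sketch below (using only the rounded figures given here) makes the relationship explicit.

```python
# Teledensity = subscriptions per 100 inhabitants, so each pair of figures
# implies the same underlying population (2020 figures quoted above).
fixed_subscriptions = 13_926_000
fixed_per_100 = 36.9
mobile_subscriptions = 36_093_021
mobile_per_100 = 95.63

pop_from_fixed = fixed_subscriptions / fixed_per_100 * 100
pop_from_mobile = mobile_subscriptions / mobile_per_100 * 100
print(f"Implied population (fixed lines):  {pop_from_fixed / 1e6:.1f} million")
print(f"Implied population (mobile lines): {pop_from_mobile / 1e6:.1f} million")
# both come out to roughly 37-38 million
```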
Below is a table of Canada's ten biggest airports by passenger traffic in 2019. Railways In 2007, Canada had an extensive network of freight and passenger railway, only a small portion of which is electrified. While intercity passenger transportation by rail is now very limited, freight transport by rail remains common. Total revenues of rail services in 2006 were $10.4 billion, of which only 2.8% came from passenger services. In a typical year, the industry earns about $11 billion, of which 3.2% comes from passengers and the rest from freight. The Canadian National Railway and Canadian Pacific Railway are Canada's two major freight railway companies, each having operations throughout North America. In 2007, 357 billion tonne-kilometres of freight were transported by rail, and 4.33 million passengers travelled 1.44 billion passenger-kilometres (an almost negligible amount compared to the 491 billion passenger-kilometres made in light road vehicles). 34,281 people were employed by the rail industry in the same year. Nationwide passenger services are provided by the federal crown corporation Via Rail. Three Canadian cities have commuter rail services: in the Montreal area by AMT, in the Toronto area by GO Transit, and in the Vancouver area by West Coast Express. Smaller railways such as Ontario Northland, Rocky Mountaineer, and Algoma Central also run passenger trains to remote rural areas. Canadian railways use standard gauge rails (see track gauge in Canada). Canada has railway links with the lower 48 US states, but no connection with Alaska other than a train ferry service from Prince Rupert, British Columbia, although a line has been proposed. There are no other international rail connections. Waterways In 2005, a large volume of cargo was loaded and unloaded at Canadian ports. The Port of Vancouver is the busiest port in Canada, moving 15% of Canada's total in domestic and international shipping in 2003. Transport Canada oversees most of the regulatory functions related to marine registration, safety of large vessels, and port pilotage duties. Many of Canada's port facilities are in the process of being divested from federal responsibility to other agencies or municipalities. Inland waterways include the St. Lawrence Seaway. Transport Canada enforces acts and regulations governing water transportation and safety. Ferry services Passenger ferry service Vancouver Island and surrounding islands and peninsulas to the British Columbia mainland Several Sunshine Coast communities to the British Columbia mainland and to Alaska Internationally to St. Pierre and Miquelon Automobile ferry service Nova Scotia to Newfoundland and Labrador Quebec to Newfoundland across the Strait of Belle Isle Labrador to Newfoundland Chandler to the Magdalen Islands, Quebec Prince Edward Island to the Magdalen Islands, Quebec Prince Edward Island to Nova Scotia Digby, Nova Scotia, to Saint John, New Brunswick Train ferry service British Columbia to Alaska or Washington state Canals The St. Lawrence waterway was at one time the world's greatest inland water navigation system. The main route canals of Canada are those of the St. Lawrence River and the Great Lakes. The others are subsidiary canals. St. Lawrence Seaway Welland Canal Soo Locks Trent-Severn Waterway Rideau Canal Ports and harbours The National Harbours Board administered Halifax, Saint John, Chicoutimi, Trois-Rivières, Churchill, and Vancouver until 1983. At one time, over 300 harbours across Canada were supervised by the Department of Transport.
A program of divestiture was implemented around the turn of the millennium, and as of 2014, 493 of the 549 sites identified for divestiture in 1995 had been sold or otherwise transferred, according to a Department of Transport list. The government maintains an active divestiture programme, and after divestiture Transport Canada oversees only 17 Canada Port Authorities for the 17 largest shipping ports. Pacific coast Victoria, British Columbia Vancouver, British Columbia Prince Rupert, British Columbia Atlantic coast Halifax, Nova Scotia Saint John, New Brunswick St. John's, Newfoundland and Labrador Sept-Îles, Quebec Sydney, Nova Scotia Botwood, Newfoundland and Labrador Arctic coast Churchill, Manitoba Great Lakes and St Lawrence River Bécancour, Quebec Hamilton, Ontario Montreal, Quebec Quebec City, Quebec Trois-Rivières, Quebec Thunder Bay, Ontario Toronto, Ontario Windsor, Ontario Merchant marine Canada's merchant marine comprised a total of 173 ships at the end of 2007. Pipelines Pipelines are part of the energy extraction and transportation network of Canada and are used to transport natural gas, natural gas liquids, crude oil, synthetic crude and other petroleum-based products. Canada has extensive pipeline networks for the transportation of crude and refined oil and for liquefied petroleum gas. Public transit Most Canadian cities have public transport, if only a bus system. Three Canadian cities have rapid transit systems, four have light rail systems, and three have commuter rail systems (see below). In 2016, 12.4% of Canadians used public transportation to get to work. This compares with 79.5% who got to work using a car (67.4% driving alone, 12.1% as part of a carpool), 5.5% who walked and 1.4% who rode a bike. Government organizations across Canada owned 17,852 buses of various types in 2016. Organizations in Ontario (38.8%) and Quebec (21.9%) accounted for just over three-fifths of the country's total bus fleet. Urban municipalities owned more than 85% of all buses. In 2016, diesel buses were the leading bus type in Canada (65.9%), followed by bio-diesel (18.1%) and hybrid (9.4%) buses. Electric, natural gas and other buses collectively accounted for the remaining 6.6%. Rapid transit systems There are three rapid transit systems operating in Canada: the Montreal Metro, the Toronto subway, and the Vancouver SkyTrain. There is also an airport circulator, the Link Train, at Toronto Pearson International Airport. It operates 24 hours a day, 7 days a week, is wheelchair-accessible, and is free of charge. Light rail systems There are light rail systems in four cities – the Calgary CTrain, the Edmonton LRT, the Ottawa O-Train, and Waterloo Region's Ion – while Toronto has an extensive streetcar system. Statistics Canada's 2016 Core Public Infrastructure Survey found that all of Canada's 247 streetcars were owned by the City of Toronto. The vast majority (87.9%) of these streetcars were purchased from 1970 to 1999, while 12.1% were purchased in 2016. Reflecting the age of the streetcars, 88.0% were reported to be in very poor condition, while 12.0% were reported to be in good condition. Commuter train systems Commuter trains serve the cities and surrounding areas of Montreal, Toronto and Vancouver.
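The 2016 commuting and bus-fleet percentages quoted above very nearly partition their totals; the short Python sketch below is a consistency check using only the shares given in this section.

```python
# Consistency check of the 2016 mode-share and bus-fleet percentages above.
commute_share = {"public transit": 12.4, "car": 79.5, "walking": 5.5, "cycling": 1.4}
print(f"Commute modes listed: {sum(commute_share.values()):.1f}%")
# about 98.8%; the small remainder covers other modes not listed here

bus_fleet_share = {"diesel": 65.9, "bio-diesel": 18.1, "hybrid": 9.4, "other": 6.6}
print(f"Bus fleet shares: {sum(bus_fleet_share.values()):.1f}%")  # 100.0%
```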
History The standard history covers the French regime, fur traders, the canals, and early roads, and gives extensive attention to the railways. European contact Prior to the arrival of European settlers, Aboriginal peoples in Canada travelled on foot. They also used canoes, kayaks, umiaks and bull boats, in addition to the snowshoe, toboggan and sled in winter. They had no wheeled vehicles and no animals larger than dogs. Europeans adopted canoes as they pushed deeper into the continent's interior, and were thus able to travel via the waterways that fed from the St. Lawrence River and Hudson Bay. In the 19th and early 20th centuries, land transportation relied on harnessing oxen to Red River ox carts or horses to wagons, while maritime transportation relied on manual labour, as with the canoe, or on wind and sail. Travel speeds by water or land were slow. Settlement was along river routes.
An invasion would bring an end to British support of Native American resistance to American expansion, typified by Tecumseh's coalition of tribes. Americans may also have wanted to acquire Canada. Once war broke out, the American strategy was to seize Canada. There was some hope that settlers in western Canada—most of them recent immigrants from the U.S.—would welcome the chance to overthrow their British rulers. However, the American invasions were defeated primarily by British regulars with support from Native Americans and Upper Canada militia. Aided by the large Royal Navy, a series of British raids on the American coast were highly successful, culminating with an attack on Washington that resulted in the British burning of the White House, the Capitol, and other public buildings. At the end of the war, Britain's American Indian allies had largely been defeated, and the Americans controlled a strip of Western Ontario centered on Fort Malden. However, Britain held much of Maine, and, with the support of their remaining American Indian allies, huge areas of the Old Northwest, including Wisconsin and much of Michigan and Illinois. With the surrender of Napoleon in 1814, Britain ended the naval policies that had angered Americans; with the defeat of the Indian tribes, the threat to American expansion was ended. The upshot was that both the United States and Canada asserted their sovereignty, Canada remained under British rule, and London and Washington had nothing more to fight over. The war was ended by the Treaty of Ghent, which took effect in February 1815. A series of postwar agreements further stabilized peaceful relations along the Canadian-US border. Canada reduced American immigration for fear of undue American influence, and built up the Anglican Church of Canada as a counterweight to the largely American Methodist and Baptist churches. In later years, Anglophone Canadians, especially in Ontario, viewed the War of 1812 as a heroic and successful resistance against invasion and as a victory that defined them as a people. The myth that the Canadian militia had defeated the invasion almost single-handedly, known as the "militia myth", became highly prevalent after the war, having been propounded by John Strachan, Anglican Bishop of York. Post War of 1812 and mid-19th century In the aftermath of the War of 1812, pro-British conservatives led by Anglican Bishop John Strachan took control in Ontario ("Upper Canada") and promoted the Anglican religion as opposed to the more republican Methodist and Baptist churches. A small interlocking elite, known as the Family Compact, took full political control. Democracy, as practiced in the US, was ridiculed. The policies had the desired effect of deterring immigration from the United States. Revolts in favor of democracy in Ontario and Quebec ("Lower Canada") in 1837 were suppressed; many of the leaders fled to the US. The American policy was to largely ignore the rebellions, and indeed ignore Canada generally in favor of westward expansion of the American frontier. American Civil War The British Empire and Canada were neutral in the American Civil War, and about 40,000 Canadians volunteered for the Union Army (many of them already living in the U.S.), while a few volunteered for the Confederate Army. However, hundreds of Americans who were called up in the draft fled to Canada. In 1864, the Confederate government tried to use Canada as a base to attack American border towns. They raided the town of St.
Albans, Vermont, on October 19, 1864, killing an American citizen and robbing three banks of over US$200,000. The three Confederates escaped to Canada, where they were arrested but then released. Many Americans suspected – falsely – that the Canadian government had known of the raid ahead of time. There was widespread anger when the raiders were released by a local court in Canada. The American Secretary of State William H. Seward let the British government know, "it is impossible to consider those proceedings as either legal, just or friendly towards the United States." Alabama claims Americans were angry at the British role during the American Civil War. Some leaders demanded a huge payment, on the premise that British involvement had lengthened the war. Senator Charles Sumner, the chairman of the Senate Foreign Relations Committee, originally wanted to ask for $2 billion, or alternatively the ceding of all of Canada to the United States. When American Secretary of State William H. Seward negotiated the Alaska Purchase with Russia in 1867, he intended it as the first step in a comprehensive plan to gain control of the entire northwest Pacific Coast. Seward was a firm believer in Manifest Destiny, primarily for its commercial advantages to the U.S. He expected British Columbia to seek annexation to the U.S. and thought Britain might accept this in exchange for the Alabama claims. Soon other elements endorsed annexation. Their plan was to annex British Columbia, the Red River Colony (Manitoba), and Nova Scotia in exchange for dropping the damage claims. The idea reached a peak in the spring and summer of 1870, with American expansionists, Canadian separatists, and pro-American Englishmen seemingly combining forces. The plan was dropped for multiple reasons: London continued to stall, American commercial and financial groups pressed Washington for a quick settlement of the dispute on a cash basis, growing Canadian nationalist sentiment in British Columbia called for staying inside the British Empire, Congress became preoccupied with Reconstruction, and most Americans showed little interest in territorial expansion. The "Alabama claims" dispute went to international arbitration. In one of the first major cases of arbitration, the tribunal in 1872 supported the American claims and ordered Britain to pay $15.5 million. Britain paid, and the episode ended in peaceful relations. Late 19th century Canada became a self-governing dominion in 1867 in internal affairs, while Britain retained control of diplomacy and defence policy. Prior to Confederation, there was an Oregon boundary dispute in which the Americans had claimed territory up to the 54°40′ parallel. The Oregon Treaty of 1846 largely resolved the issue, splitting the disputed territory: the northern half became British Columbia, and the southern half eventually formed the states of Washington and Oregon. Strained relations with America continued, however, due to a series of small-scale armed incursions (the "Fenian raids") by Irish-American Civil War veterans across the border from 1866 to 1871 in an attempt to trade Canada for Irish independence. The American government, angry at Canadian tolerance of Confederate raiders during the American Civil War of 1861–1865, moved very slowly to disarm the Fenians. The Fenian raids were small-scale attacks carried out by the Fenian Brotherhood, an Irish Republican organization based among Irish Catholics in the United States.
Targets included British Army forts, customs posts and other locations near the border. The raids were small, unsuccessful episodes in 1866, and again from 1870 to 1871. They aimed to bring pressure on Great Britain to withdraw from Ireland. None of these raids achieved their aims and all were quickly defeated by local Canadian forces. The British government, in charge of diplomatic relations, protested cautiously, as Anglo-American relations were tense. Much of the tension was relieved as the Fenians faded away and in 1872 by the settlement of the Alabama Claims, when Britain paid the U.S. $15.5 million for war losses caused by warships built in Britain and sold to the Confederacy. Disputes over ocean boundaries on Georges Bank and over fishing, whaling, and sealing rights in the Pacific were settled by international arbitration, setting an important precedent. Early 20th century Alaska boundary A short-lived controversy was the Alaska boundary dispute, settled in favor of the United States in 1903. The issue was unimportant until the Klondike Gold Rush brought tens of thousands of men to Canada's Yukon, and they had to arrive through American ports. Canada needed its port and claimed that it had a legal right to a port near the present American town of Haines, Alaska. It would provide an all-Canadian route to the rich goldfields. The dispute was settled by arbitration, and the British delegate voted with the Americans—to the astonishment and disgust of Canadians who suddenly realized that Britain considered its relations with the United States paramount compared to those with Canada. The arbitration validated the status quo, but made Canada angry at London. 1907 saw a minor controversy over USS Nashville sailing into the Great Lakes via Canada without Canadian permission. To head off future embarrassments, in 1909 the two sides signed the International Boundary Waters Treaty and the International Joint Commission was established to manage the Great Lakes and keep them disarmed. It was amended in World War II to allow the building and training of warships. Free trade rejected Anti-Americanism reached a shrill peak in 1911 in Canada. The Liberal government in 1911 negotiated a Reciprocity treaty with the U.S. that would lower trade barriers. Canadian manufacturing interests were alarmed that free trade would allow the bigger and more efficient American factories to take their markets. The Conservatives made it a central campaign issue in the 1911 election, warning that it would be a "sell out" to the United States with economic annexation a special danger. The Conservative slogan was "No truck or trade with the Yankees", as they appealed to Canadian nationalism and nostalgia for the British Empire to win a major victory. World War I British Canadians were annoyed in 1914-16 when Washington insisted on neutrality and seemed to profit heavily while Canada was sacrificing its wealth and its youth. However when the US finally declared war on Germany in April 1917, there was swift cooperation and friendly coordination, as one historian reports: Official co-operation between Canada and the United States—the pooling of grain, fuel, power, and transportation resources, the underwriting of a Canadian loan by bankers of New York—produced a good effect on the public mind. Canadian recruiting detachments were welcomed in the United States, while a reciprocal agreement was ratified to facilitate the return of draft-evaders. 
A Canadian War Mission was established at Washington, and many other ways the activities of the two countries were coordinated for efficiency. Immigration regulations were relaxed and thousands of American farmhands crossed the border to assist in harvesting the Canadian crops. Officially and publicly, at least, the two nations were on better terms than ever before in their history, and on the American side this attitude extended through almost all classes of society. Post-First World War Canada demanded and received permission from London to send its own delegation to the Versailles Peace Talks in 1919, with the proviso that it sign the treaty under the British Empire. Canada subsequently took responsibility for its own foreign and military affairs in the 1920s. Its first ambassador to the United States, Vincent Massey, was named in 1927. The United States first ambassador to Canada was William Phillips. Canada became an active member of the British Commonwealth, the League of Nations, and the World Court, none of which included the U.S. In July 1923, as part of his Pacific Northwest tour and a week before his death, US President Warren Harding visited Vancouver, making him the first head of state of the United States to visit confederated Canada. The then Premier of British Columbia, John Oliver, and then mayor of Vancouver, Charles Tisdall, hosted a lunch in his honor at the Hotel Vancouver. Over 50,000 people heard Harding speak in Stanley Park. A monument to Harding designed by Charles Marega was unveiled in Stanley Park in 1925. Relations with the United States were cordial until 1930, when Canada vehemently protested the new Smoot–Hawley Tariff Act by which the U.S. raised tariffs (taxes) on products imported from Canada. Canada retaliated with higher tariffs of its own against American products, and moved toward more trade within the British Commonwealth. U.S.–Canadian trade fell 75% as the Great Depression dragged both countries down. Down to the 1920s the war and naval departments of both nations designed hypothetical war game scenarios on paper with the other as an enemy. These were routine training exercises; the departments were never told to get ready for a real war. In 1921, Canada developed Defence Scheme No. 1 for an attack on American cities and for forestalling invasion by the United States until British reinforcements arrived. Through the later 1920s and 1930s, the United States Army War College developed a plan for a war with the British Empire waged largely on North American territory, in War Plan Red. Herbert Hoover meeting in 1927 with British Ambassador Sir Esme Howard agreed on the "absurdity of contemplating the possibility of war between the United States and the British Empire." In 1938, as the roots of World War II were set in motion, U.S. President Franklin Roosevelt gave a public speech at Queen's University in Kingston, Ontario, declaring that the United States would not sit idly by if another power tried to dominate Canada. Diplomats saw it as a clear warning to Germany not to attack Canada. Second World War The two nations cooperated closely in World War II, as both nations saw new levels of prosperity and a determination to defeat the Axis powers. Prime Minister William Lyon Mackenzie King and President Franklin D. Roosevelt were determined not to repeat the mistakes of their predecessors. They met in August 1940 at Ogdensburg, issuing a declaration calling for close cooperation, and formed the Permanent Joint Board on Defense (PJBD). 
King sought to raise Canada's international visibility by hosting the August 1943 Quadrant conference in Quebec on military and political strategy; he was a gracious host but was kept out of the important meetings by Winston Churchill and Roosevelt. Canada allowed the construction of the Alaska Highway and participated in the building of the atomic bomb. 49,000 Americans joined the RCAF (Canadian) or RAF (British) air forces through the Clayton Knight Committee, which had Roosevelt's permission to recruit in the U.S. in 1940–42. American attempts in the mid-1930s to integrate British Columbia into a united West Coast military command had aroused Canadian opposition. Fearing a Japanese invasion of Canada's vulnerable British Columbia Coast, American officials urged the creation of a united military command for an eastern Pacific Ocean theater of war. Canadian leaders feared American imperialism and the loss of autonomy more than a Japanese invasion. In 1941, Canadians successfully argued within the PJBD for mutual cooperation rather than unified command for the West Coast. Newfoundland The United States built large military bases in Newfoundland during World War II. At the time it was a British crown colony, having lost dominion status. The American spending ended the depression and brought new prosperity; Newfoundland's business community sought closer ties with the United States, as expressed by the Economic Union Party. Ottawa took notice and wanted Newfoundland to join Canada, which it did after hotly contested referenda. There was little demand in the United States for the acquisition of Newfoundland, so the United States did not protest the British decision not to allow an American option on the Newfoundland referendum. Cold War Prime Minister William Lyon Mackenzie King, working closely with his Foreign Minister Louis St. Laurent, handled foreign relations in 1945–48 in a cautious fashion. Canada donated money to the United Kingdom to help it rebuild; was elected to the UN Security Council; and helped design NATO. However, Mackenzie King rejected free trade with the United States, and decided not to play a role in the Berlin airlift. Canada had been actively involved in the League of Nations, primarily because it could act separately from Britain. It played a modest role in the postwar formation of the United Nations, as well as the International Monetary Fund. It played a somewhat larger role in 1947 in designing the General Agreement on Tariffs and Trade. From the mid-20th century onwards, Canada and the United States became extremely close partners, and Canada was a close ally of the United States during the Cold War. Vietnam War resisters While Canada openly accepted draft evaders and later deserters from the United States, there was never a serious international dispute over Canada's actions, whereas Sweden's acceptance was heavily criticized by the United States. The issue of accepting American exiles became a local political debate in Canada that focused on Canada's sovereignty in its immigration law. The United States did not become involved, because American politicians viewed Canada as a geographically close ally not worth disturbing. Nixon Shock 1971 The United States had become Canada's largest market, and after the war the Canadian economy became so dependent on smooth trade flows with the United States that when the United States enacted the "Nixon Shock" economic policies in 1971 (including a 10% tariff on all imports), the Canadian government was thrown into a panic.
Washington refused to exempt Canada from its 1971 New Economic Policy, so Trudeau saw a solution in closer economic ties with Europe. Trudeau proposed a "Third Option" policy of diversifying Canada's trade and downgrading the importance of the American market. In a 1972 speech in Ottawa, Nixon declared the "special relationship" between Canada and the United States dead. Relations deteriorated on many points in the Nixon years (1969–74), including trade disputes, defense agreements, energy, fishing, the environment, cultural imperialism, and foreign policy. They changed for the better when Trudeau and President Jimmy Carter (1977–1981) found a better rapport. The late 1970s saw a more sympathetic American attitude toward Canadian political and economic needs, the pardoning of draft evaders who had moved to Canada, and the passing of old issues such as the Watergate scandal and the Vietnam War. Canada more than ever welcomed American investment during the stagflation that hurt both nations. 1990s The main issues in Canada–U.S. relations in the 1990s focused on the North American Free Trade Agreement, which came into effect in 1994. It created a common market that by 2014 was worth $19 trillion, encompassed 470 million people, and had created millions of jobs. Wilson says, "Few dispute that NAFTA has produced large and measurable gains for Canadian consumers, workers, and businesses." However, he adds, "NAFTA has fallen well short of expectations." Migration history From the 1750s to the 21st century, there has been extensive mingling of the Canadian and American populations, with large movements in both directions. New England Yankees settled large parts of Nova Scotia before 1775, and were neutral during the American Revolution. At the end of the American Revolution, about 75,000 United Empire Loyalists moved out of the new United States to Nova Scotia, New Brunswick, and the lands of Quebec east and south of Montreal. From 1790 to 1812 many farmers moved from New York and New England into Upper Canada (mostly to Niagara and the north shore of Lake Ontario). In the mid and late 19th century, gold rushes attracted American prospectors, mostly to British Columbia after the Cariboo Gold Rush and Fraser Canyon Gold Rush, and later to the Yukon Territory. In the early 20th century, the opening of land blocks in the Prairie Provinces attracted many farmers from the American Midwest. Many Mennonites immigrated from Pennsylvania and formed their own colonies. In the 1890s some Mormons went north to form communities in Alberta after The Church of Jesus Christ of Latter-day Saints rejected plural marriage. The 1960s saw the arrival of about 50,000 draft-dodgers who opposed the Vietnam War (Renee Kasinsky, Refugees from Militarism: Draft-Age Americans in Canada, 1976). Canada was a way-station through which immigrants from other lands stopped for a while, ultimately heading to the U.S. Between 1851 and 1951, 7.1 million people arrived in Canada (mostly from continental Europe) and 6.6 million left Canada, most of them to the U.S. After 1850, the pace of industrialization and urbanization was much faster in the United States, drawing a wide range of immigrants from the North. By 1870, one-sixth of all the people born in Canada had moved to the United States, with the highest concentrations in New England, which was the destination of Francophone emigrants from Quebec and Anglophone emigrants from the Maritimes.
It was common for people to move back and forth across the border, such as seasonal lumberjacks, entrepreneurs looking for larger markets, and families looking for jobs in the textile mills that paid much higher wages than in Canada. The southward migration slacked off after 1890, as Canadian industry began a growth spurt. By then, the American frontier was closing, and thousands of farmers looking for fresh land moved from the United States north into the Prairie Provinces. The net result of the flows was that in 1901 there were 128,000 American-born residents in Canada (3.5% of the Canadian population) and 1.18 million Canadian-born residents in the United States (1.6% of the U.S. population). In the late 19th and early 20th centuries, about 900,000 French Canadians moved to the U.S., with 395,000 residents there in 1900. Two-thirds went to mill towns in New England, where they formed distinctive ethnic communities. By the late 20th century, most had abandoned the French language (see New England French), but most kept the Catholic religion. About twice as many English Canadians came to the U.S., but they did not form distinctive ethnic settlements. Relations between political executives The executive of each country is represented differently. The President of the United States serves as both the head of state and head of government, and his "administration" is the executive, while the Prime Minister of Canada is head of government only, and his or her "government" or "ministry" directs the executive. W.L. Mackenzie King and Franklin D. Roosevelt (October 1935 – April 1945) In 1940, W.L. Mackenzie King and Franklin D. Roosevelt signed a defense pact, known as the Ogdensburg Agreement. King hosted conferences for Churchill and Roosevelt, but did not participate in the talks. Louis St. Laurent and Harry S. Truman (November 1948 – January 1953) Prime Minister St. Laurent and President Truman were both anti-communist during the early years of the Cold War. John G. Diefenbaker and Dwight Eisenhower (June 1957 – January 1961) President Dwight Eisenhower (1953–1961) took pains to foster good relations with Progressive Conservative John Diefenbaker (1957–1963). That led to approval of plans to join together in NORAD, an integrated air defence system, in mid-1957. Relations with President John Kennedy were much less cordial. Diefenbaker opposed apartheid in South Africa and helped force it out of the Commonwealth of Nations. His indecision on whether to accept Bomarc nuclear missiles from the United States led to his government's downfall. John G. Diefenbaker and John F. Kennedy (January 1961 – April 1963) Diefenbaker and President John F. Kennedy did not get along well personally. This was evident in Diefenbaker's response to the Cuban Missile Crisis, in which he did not support the United States. However, Diefenbaker's Minister of Defence went behind Diefenbaker's back and put Canada's military on high alert, given Canada's legal treaty obligations and in order to try to appease Kennedy. Lester B. Pearson and Lyndon B. Johnson (November 1963 – April 1968) In 1965, Prime Minister Lester B. Pearson gave a speech in Philadelphia criticizing American involvement in the Vietnam War. This infuriated Lyndon B. Johnson, who gave him a harsh talk, saying "You don't come here and piss on my rug". Brian Mulroney and Ronald Reagan (September 1984 – January 1989) Relations between Brian Mulroney and Ronald Reagan were famously close.
This relationship resulted in negotiations for the Canada–United States Free Trade Agreement and the U.S.–Canada Air Quality Agreement to reduce acid-rain-causing emissions, both major policy goals of Mulroney that would be finalized under the presidency of George H. W. Bush. Jean Chrétien and Bill Clinton (November 1993 – January 2001) Although Jean Chrétien was wary of appearing too close to President Bill Clinton, both men had a passion for golf. During a news conference with Prime Minister Chrétien in April 1997, President Clinton quipped "I don't know if any two world leaders have played golf together more than we have, but we meant to break a record". Their governments had many small trade quarrels over the Canadian content of American magazines, softwood lumber, and so on, but on the whole were quite friendly. Both leaders had run on reforming or abolishing NAFTA, but the agreement went ahead with the addition of environmental and labor side agreements. Crucially, the Clinton administration lent rhetorical support to Canadian unity during the 1995 referendum in Quebec on separation from Canada. Jean Chrétien and George W. Bush (January 2001 – December 2003) Relations between Chrétien and George W. Bush were strained throughout their overlapping times in office. After the September 11 terror attacks, Jean Chrétien publicly mused that U.S. foreign policy might be part of the "root causes" of terrorism. Some Americans criticized his "smug moralism", and Chrétien's public refusal to support the 2003 Iraq war was met with negative responses in the United States, especially among conservatives. Stephen Harper and George W. Bush (February 2006 – January 2009) Stephen Harper and George W. Bush were thought to share warm personal relations and also close ties between their administrations. Because Bush was so unpopular among liberals in Canada (particularly in the media), this was underplayed by the Harper government. Shortly after being congratulated by Bush for his victory in February 2006, Harper rebuked U.S. ambassador to Canada David Wilkins for criticizing the Conservatives' plans to assert Canada's sovereignty over the Arctic Ocean waters with military force. Stephen Harper and Barack Obama (January 2009 – November 2015) President Barack Obama's first international trip was to Canada on February 19, 2009, thereby sending a strong message of peace and cooperation. With the exception of Canadian lobbying against "Buy American" provisions in the U.S. stimulus package, relations between the two administrations were smooth. They also held friendly bets on hockey games during the Winter Olympic season. In the 2010 Winter Olympics, hosted by Canada in Vancouver, Canada defeated the US in both gold medal matches, entitling Stephen Harper to receive a case of Molson Canadian beer from Barack Obama; conversely, if Canada had lost, Harper would have provided a case of Yuengling beer to Obama. During the 2014 Winter Olympics, alongside U.S. Secretary of State John Kerry and Minister of Foreign Affairs John Baird, Stephen Harper was given a case of Samuel Adams beer by Obama for the Canadian gold medal victory over the US in women's hockey and the semi-final victory over the US in men's hockey.
Canada–United States Regulatory Cooperation Council (RCC) (2011) On February 4, 2011, Harper and Obama issued a "Declaration on a Shared Vision for Perimeter Security and Economic Competitiveness" and announced the creation of the Canada–United States Regulatory Cooperation Council (RCC) "to increase regulatory transparency and coordination between the two countries." Health Canada and the United States Food and Drug Administration (FDA), under the RCC mandate, undertook the "first of its kind" initiative by selecting "as its first area of alignment common cold indications for certain over-the-counter antihistamine ingredients (GC 2013-01-10)." On December 7, 2011, Harper flew to Washington, met with Obama and signed an agreement to implement the joint action plans that had been developed since the initial meeting in February. The plans called on both countries to spend more on border infrastructure, share more information on people who cross the border, and acknowledge more of each other's safety and security inspections of third-country traffic. An editorial in The Globe and Mail praised the agreement for giving Canada the ability to track whether failed refugee claimants have left Canada via the U.S. and for eliminating "duplicated baggage screenings on connecting flights". The agreement is not a legally binding treaty, and relies on the political will and ability of the executives of both governments to implement its terms. These types of executive agreements are routine on both sides of the Canada–U.S. border. Justin Trudeau and Barack Obama (November 2015 – January 2017) President Barack Obama and Prime Minister Justin Trudeau first met formally at the APEC summit meeting in Manila, Philippines in November 2015, nearly a week after the latter was sworn into office. Both leaders expressed eagerness for increased cooperation and coordination between the two countries during the course of Trudeau's government, with Trudeau promising an "enhanced Canada–U.S. partnership". On November 6, 2015, Obama announced the U.S. State Department's rejection of the proposed Keystone XL pipeline, the fourth phase of the Keystone oil pipeline system running between Canada and the United States, to which Trudeau expressed disappointment but said that the rejection would not damage Canada–U.S. relations and would instead provide a "fresh start" to strengthening ties through cooperation and coordination, saying that "the Canada–U.S. relationship is much bigger than any one project." Obama has since praised Trudeau's efforts to prioritize the fight against climate change, calling them "extraordinarily helpful" in establishing a worldwide consensus on addressing the issue. Although Trudeau told Obama of his plans to withdraw Canada's McDonnell Douglas CF-18 Hornet jets from the American-led intervention against ISIL, Trudeau said that Canada would still "do more than its part" in combating the terrorist group by increasing the number of Canadian special forces members training and fighting on the ground in Iraq and Syria. Trudeau visited the White House for an official visit and state dinner on March 10, 2016. Trudeau and Obama were reported to have shared warm personal relations during the visit, making humorous remarks about which country was better at hockey and which country had better beer. Obama complimented Trudeau's 2015 election campaign for its "message of hope and change" and "positive and optimistic vision".
Obama and Trudeau also held "productive" discussions on climate change and relations between the two countries, and Trudeau invited Obama to speak in the Canadian parliament in Ottawa later in the year. Justin Trudeau and Donald Trump (January 2017 – January 2021) Following the victory of Donald Trump in the 2016 U.S. presidential election, Trudeau congratulated him and invited him to visit Canada at the "earliest opportunity." Prime Minister Trudeau and President Trump formally met for the first time at the White House on February 13, 2017, nearly a month after Trump was sworn into office. Trump ruffled relations with Canada by imposing tariffs on softwood lumber. Diafiltered milk was raised by Trump as an area that needed negotiating. In 2018, Trump and Trudeau negotiated the United States–Mexico–Canada Agreement (USMCA), a free trade agreement that superseded NAFTA.
These were routine training exercises; the departments were never told to get ready for a real war. In 1921, Canada developed Defence Scheme No. 1 for an attack on American cities and for forestalling invasion by the United States until British reinforcements arrived. Through the later 1920s and 1930s, the United States Army War College developed a plan for a war with the British Empire waged largely on North American territory, in War Plan Red. Herbert Hoover meeting in 1927 with British Ambassador Sir Esme Howard agreed on the "absurdity of contemplating the possibility of war between the United States and the British Empire." In 1938, as the roots of World War II were set in motion, U.S. President Franklin Roosevelt gave a public speech at Queen's University in Kingston, Ontario, declaring that the United States would not sit idly by if another power tried to dominate Canada. Diplomats saw it as a clear warning to Germany not to attack Canada. Second World War The two nations cooperated closely in World War II, as both nations saw new levels of prosperity and a determination to defeat the Axis powers. Prime Minister William Lyon Mackenzie King and President Franklin D. Roosevelt were determined not to repeat the mistakes of their predecessors. They met in August 1940 at Ogdensburg, issuing a declaration calling for close cooperation, and formed the Permanent Joint Board on Defense (PJBD). King sought to raise Canada's international visibility by hosting the August 1943 Quadrant conference in Quebec on military and political strategy; he was a gracious host but was kept out of the important meetings by Winston Churchill and Roosevelt. Canada allowed the construction of the Alaska Highway and participated in the building of the atomic bomb. 49,000 Americans joined the RCAF (Canadian) or RAF (British) air forces through the Clayton Knight Committee, which had Roosevelt's permission to recruit in the U.S. in 1940–42. American attempts in the mid-1930s to integrate British Columbia into a united West Coast military command had aroused Canadian opposition. Fearing a Japanese invasion of Canada's vulnerable British Columbia Coast, American officials urged the creation of a united military command for an eastern Pacific Ocean theater of war. Canadian leaders feared American imperialism and the loss of autonomy more than a Japanese invasion. In 1941, Canadians successfully argued within the PJBD for mutual cooperation rather than unified command for the West Coast. Newfoundland The United States built large military bases in Newfoundland during World War II. At the time it was a British crown colony, having lost dominion status. The American spending ended the depression and brought new prosperity; Newfoundland's business community sought closer ties with the United States as expressed by the Economic Union Party. Ottawa took notice and wanted Newfoundland to join Canada, which it did after hotly contested referenda. There was little demand in the United States for the acquisition of Newfoundland, so the United States did not protest the British decision not to allow an American option on the Newfoundland referendum. Cold War Prime Minister William Lyon Mackenzie King, working closely with his Foreign Minister Louis St. Laurent, handled foreign relations 1945–48 in cautious fashion. Canada donated money to the United Kingdom to help it rebuild; was elected to the UN Security Council; and helped design NATO. 
However, Mackenzie King rejected free trade with the United States, and decided not to play a role in the Berlin airlift. Canada had been actively involved in the League of Nations, primarily because it could act separately from Britain. It played a modest role in the postwar formation of the United Nations, as well as the International Monetary Fund. It played a somewhat larger role in 1947 in designing the General Agreement on Tariffs and Trade. From the mid-20th century onwards, Canada and the United States became extremely close partners. Canada was a close ally of the United States during the Cold War.

Vietnam War resisters

While Canada openly accepted draft evaders and later deserters from the United States, there was never a serious international dispute over Canada's actions, while Sweden's acceptance was heavily criticized by the United States. The issue of accepting American exiles became a local political debate in Canada that focused on Canada's sovereignty in its immigration law. The United States did not become involved because American politicians viewed Canada as a geographically close ally not worth disturbing.

Nixon Shock 1971

The United States had become Canada's largest market, and after the war the Canadian economy became so dependent on smooth trade flows with the United States that in 1971, when the United States enacted the "Nixon Shock" economic policies (including a 10% tariff on all imports), the Canadian government was thrown into a panic. Washington refused to exempt Canada from its 1971 New Economic Policy, so Trudeau saw a solution in closer economic ties with Europe. Trudeau proposed a "Third Option" policy of diversifying Canada's trade and downgrading the importance of the American market. In a 1972 speech in Ottawa, Nixon declared the "special relationship" between Canada and the United States dead. Relations deteriorated on many points in the Nixon years (1969–74), including trade disputes, defense agreements, energy, fishing, the environment, cultural imperialism, and foreign policy. Relations improved when Trudeau and President Jimmy Carter (1977–1981) found a better rapport. The late 1970s saw a more sympathetic American attitude toward Canadian political and economic needs, the pardoning of draft evaders who had moved to Canada, and the passing of old issues such as the Watergate scandal and the Vietnam War. Canada more than ever welcomed American investments during the stagflation that hurt both nations.

1990s

The main issues in Canada–U.S. relations in the 1990s focused on the North American Free Trade Agreement, which took effect in 1994. It created a common market that by 2014 was worth $19 trillion, encompassed 470 million people, and had created millions of jobs. Wilson says, "Few dispute that NAFTA has produced large and measurable gains for Canadian consumers, workers, and businesses." However, he adds, "NAFTA has fallen well short of expectations."

Migration history

From the 1750s to the 21st century, there has been extensive mingling of the Canadian and American populations, with large movements in both directions. New England Yankees settled large parts of Nova Scotia before 1775, and were neutral during the American Revolution. At the end of the American Revolution, about 75,000 United Empire Loyalists moved out of the new United States to Nova Scotia, New Brunswick, and the lands of Quebec, east and south of Montreal. 
From 1790 to 1812 many farmers moved from New York and New England into Upper Canada (mostly to Niagara, and the north shore of Lake Ontario). In the mid and late 19th century, gold rushes attracted American prospectors, mostly to British Columbia after the Cariboo Gold Rush and Fraser Canyon Gold Rush, and later to the Yukon Territory. In the early 20th century, the opening of land blocks in the Prairie Provinces attracted many farmers from the American Midwest. Many Mennonites immigrated from Pennsylvania and formed their own colonies. In the 1890s some Mormons went north to form communities in Alberta after The Church of Jesus Christ of Latter-day Saints rejected plural marriage. The 1960s saw the arrival of about 50,000 draft-dodgers who opposed the Vietnam War (Renee Kasinsky, Refugees from Militarism: Draft Age Americans in Canada, 1976). Canada was a way-station through which immigrants from other lands stopped for a while, ultimately heading to the U.S. Between 1851 and 1951, 7.1 million people arrived in Canada (mostly from Continental Europe), and 6.6 million left Canada, most of them to the U.S. After 1850, the pace of industrialization and urbanization was much faster in the United States, drawing a wide range of immigrants from the North. By 1870, one-sixth of all the people born in Canada had moved to the United States, with the highest concentrations in New England, which was the destination of Francophone emigrants from Quebec and Anglophone emigrants from the Maritimes. It was common for people to move back and forth across the border, such as seasonal lumberjacks, entrepreneurs looking for larger markets, and families looking for jobs in the textile mills that paid much higher wages than in Canada. The southward migration slacked off after 1890, as Canadian industry began a growth spurt. By then, the American frontier was closing, and thousands of farmers looking for fresh land moved from the United States north into the Prairie Provinces. The net result of these flows was that in 1901 there were 128,000 American-born residents in Canada (3.5% of the Canadian population) and 1.18 million Canadian-born residents in the United States (1.6% of the U.S. population). In the late 19th and early 20th centuries, about 900,000 French Canadians moved to the U.S., with 395,000 residents there in 1900. Two-thirds went to mill towns in New England, where they formed distinctive ethnic communities. By the late 20th century, most had abandoned the French language (see New England French), but most kept the Catholic religion. About twice as many English Canadians came to the U.S., but they did not form distinctive ethnic settlements.

Relations between political executives

The executive of each country is represented differently. The President of the United States serves as both the head of state and head of government, and his "administration" is the executive, while the Prime Minister of Canada is head of government only, and his or her "government" or "ministry" directs the executive.

W.L. Mackenzie King and Franklin D. Roosevelt (October 1935 – April 1945)

In 1940, W.L. Mackenzie King and Franklin D. Roosevelt signed a defense pact, known as the Ogdensburg Agreement. King hosted conferences for Churchill and Roosevelt, but did not participate in the talks.

Louis St. Laurent and Harry S. Truman (November 1948 – January 1953)

Prime Minister St. Laurent and President Truman were both anti-communist during the early years of the Cold War. 
John G. Diefenbaker and Dwight Eisenhower (June 1957 – January 1961)

President Dwight Eisenhower (1953–1961) took pains to foster good relations with Progressive Conservative John Diefenbaker (1957–1963). That led to approval of plans to join together in NORAD, an integrated air defence system, in mid-1957. Relations with President John Kennedy were much less cordial. Diefenbaker opposed apartheid in South Africa and helped force it out of the Commonwealth of Nations. His indecision on whether to accept Bomarc nuclear missiles from the United States led to his government's downfall.

John G. Diefenbaker and John F. Kennedy (January 1961 – April 1963)

Diefenbaker and President John F. Kennedy did not get along well personally. This was evident in Diefenbaker's response to the Cuban Missile Crisis, where he did not support the United States. However, Diefenbaker's Minister of Defence went behind his back and placed Canada's military on high alert, given Canada's legal treaty obligations and in order to appease Kennedy.

Lester B. Pearson and Lyndon B. Johnson (November 1963 – April 1968)

In 1965, Prime Minister Lester B. Pearson gave a speech in Philadelphia criticizing American involvement in the Vietnam War. This infuriated Lyndon B. Johnson, who gave him a harsh talk, saying "You don't come here and piss on my rug".

Brian Mulroney and Ronald Reagan (September 1984 – January 1989)

Relations between Brian Mulroney and Ronald Reagan were famously close. This relationship resulted in negotiations for the Canada–United States Free Trade Agreement and the U.S.–Canada Air Quality Agreement to reduce acid-rain-causing emissions, both major policy goals of Mulroney that would be finalized under the presidency of George H. W. Bush.

Jean Chrétien and Bill Clinton (November 1993 – January 2001)

Although Jean Chrétien was wary of appearing too close to President Bill Clinton, both men had a passion for golf. During a news conference with Prime Minister Chrétien in April 1997, President Clinton quipped "I don't know if any two world leaders have played golf together more than we have, but we meant to break a record". Their governments had many small trade quarrels over the Canadian content of American magazines, softwood lumber, and so on, but on the whole were quite friendly. Both leaders had run on reforming or abolishing NAFTA, but the agreement went ahead with the addition of environmental and labor side agreements. Crucially, the Clinton administration lent rhetorical support to Canadian unity during the 1995 referendum in Quebec on separation from Canada.

Jean Chrétien and George W. Bush (January 2001 – December 2003)

Relations between Chrétien and George W. Bush were strained throughout their overlapping times in office. After the September 11 terror attacks, Jean Chrétien publicly mused that U.S. foreign policy might be part of the "root causes" of terrorism. Some Americans criticized his "smug moralism", and Chrétien's public refusal to support the 2003 Iraq war was met with negative responses in the United States, especially among conservatives.

Stephen Harper and George W. Bush (February 2006 – January 2009)

Stephen Harper and George W. Bush were thought to share warm personal relations and also close ties between their administrations. Because Bush was so unpopular among liberals in Canada (particularly in the media), this was underplayed by the Harper government. Shortly after being congratulated by Bush for his victory in February 2006, Harper rebuked U.S. 
ambassador to Canada David Wilkins for criticizing the Conservatives' plans to assert Canada's sovereignty over the Arctic Ocean waters with military force.

Stephen Harper and Barack Obama (January 2009 – November 2015)

President Barack Obama's first international trip was to Canada on February 19, 2009, thereby sending a strong message of peace and cooperation. With the exception of Canadian lobbying against "Buy American" provisions in the U.S. stimulus package, relations between the two administrations were smooth. They also held friendly bets on hockey games during the Winter Olympic season. In the 2010 Winter Olympics hosted by Canada in Vancouver, Canada defeated the US in both gold medal matches, entitling Stephen Harper to receive a case of Molson Canadian beer from Barack Obama; conversely, if Canada had lost, Harper would have provided a case of Yuengling beer to Obama. During the 2014 Winter Olympics, alongside U.S. Secretary of State John Kerry and Minister of Foreign Affairs John Baird, Stephen Harper was given a case of Samuel Adams beer by Obama for the Canadian gold medal victory over the US in women's hockey, and the semi-final victory over the US in men's hockey. 
been the domain of Christian cathedral schools or monastic schools (Scholae monasticae), led by monks and nuns. Evidence of such schools dates back to the 6th century CE. These new universities expanded the curriculum to include academic programs for clerics, lawyers, civil servants, and physicians. The university is generally regarded as an institution that has its origin in the Medieval Christian setting. Accompanying the rise of the "new towns" throughout Europe, mendicant orders were founded, bringing the consecrated religious life out of the monastery and into the new urban setting. The two principal mendicant movements were the Franciscans and the Dominicans, founded by St. Francis and St. Dominic, respectively. Both orders made significant contributions to the development of the great universities of Europe. Another new order was the Cistercians, whose large isolated monasteries spearheaded the settlement of former wilderness areas. In this period, church building and ecclesiastical architecture reached new heights, culminating in the orders of Romanesque and Gothic architecture and the building of the great European cathedrals. Christian nationalism emerged during this era in which Christians felt the impulse to recover lands in which Christianity had historically flourished. From 1095 under the pontificate of Urban II, the First Crusade was launched. These were a series of military campaigns in the Holy Land and elsewhere, initiated in response to pleas from the Byzantine Emperor Alexios I for aid against Turkish expansion. The Crusades ultimately failed to stifle Islamic aggression and even contributed to Christian enmity with the sacking of Constantinople during the Fourth Crusade. The Christian Church experienced internal conflict between the 7th and 13th centuries that resulted in a schism between the so-called Latin or Western Christian branch (the Catholic Church), and an Eastern, largely Greek, branch (the Eastern Orthodox Church). The two sides disagreed on a number of administrative, liturgical and doctrinal issues, most prominently Eastern Orthodox opposition to papal supremacy. The Second Council of Lyon (1274) and the Council of Florence (1439) attempted to reunite the churches, but in both cases, the Eastern Orthodox refused to implement the decisions, and the two principal churches remain in schism to the present day. However, the Catholic Church has achieved union with various smaller eastern churches. In the thirteenth century, a new emphasis on Jesus' suffering, exemplified by the Franciscans' preaching, had the consequence of turning worshippers' attention towards Jews, on whom Christians had placed the blame for Jesus' death. Christianity's limited tolerance of Jews was not new—Augustine of Hippo said that Jews should not be allowed to enjoy the citizenship that Christians took for granted—but the growing antipathy towards Jews was a factor that led to the expulsion of Jews from England in 1290, the first of many such expulsions in Europe. Beginning around 1184, following the crusade against Cathar heresy, various institutions, broadly referred to as the Inquisition, were established with the aim of suppressing heresy and securing religious and doctrinal unity within Christianity through conversion and prosecution. Protestant Reformation and Counter-Reformation The 15th-century Renaissance brought about a renewed interest in ancient and classical learning. During the Reformation, Martin Luther posted the Ninety-five Theses 1517 against the sale of indulgences. 
Printed copies soon spread throughout Europe. In 1521 the Edict of Worms condemned and excommunicated Luther and his followers, resulting in the schism of Western Christendom into several branches. Other reformers like Zwingli, Oecolampadius, Calvin, Knox, and Arminius further criticized Catholic teaching and worship. These challenges developed into the movement called Protestantism, which repudiated the primacy of the pope, the role of tradition, the seven sacraments, and other doctrines and practices. The Reformation in England began in 1534, when King Henry VIII had himself declared head of the Church of England. Beginning in 1536, the monasteries throughout England, Wales and Ireland were dissolved. Thomas Müntzer, Andreas Karlstadt and other theologians perceived both the Catholic Church and the confessions of the Magisterial Reformation as corrupted. Their activity brought about the Radical Reformation, which gave birth to various Anabaptist denominations. Partly in response to the Protestant Reformation, the Catholic Church engaged in a substantial process of reform and renewal, known as the Counter-Reformation or Catholic Reform. The Council of Trent clarified and reasserted Catholic doctrine. During the following centuries, competition between Catholicism and Protestantism became deeply entangled with political struggles among European states. Meanwhile, the discovery of America by Christopher Columbus in 1492 brought about a new wave of missionary activity. Partly from missionary zeal, but under the impetus of colonial expansion by the European powers, Christianity spread to the Americas, Oceania, East Asia and sub-Saharan Africa. Throughout Europe, the division caused by the Reformation led to outbreaks of religious violence and the establishment of separate state churches. Lutheranism spread into the northern, central, and eastern parts of present-day Germany, Livonia, and Scandinavia. Anglicanism was established in England in 1534. Calvinism and its varieties, such as Presbyterianism, were introduced in Scotland, the Netherlands, Hungary, Switzerland, and France. Arminianism gained followers in the Netherlands and Frisia. Ultimately, these differences led to the outbreak of conflicts in which religion was a key factor. The Thirty Years' War, the English Civil War, and the French Wars of Religion are prominent examples. These events intensified the Christian debate on persecution and toleration. In the revival of neoplatonism, Renaissance humanists did not reject Christianity; quite the contrary, many of the greatest works of the Renaissance were devoted to it, and the Catholic Church patronized many works of Renaissance art. Much, if not most, of the new art was commissioned by or in dedication to the Church. Some scholars and historians credit Christianity with having contributed to the rise of the Scientific Revolution. Many well-known historical figures who influenced Western science considered themselves Christian, such as Nicolaus Copernicus, Galileo Galilei, Johannes Kepler, Isaac Newton and Robert Boyle.

Post-Enlightenment

In the era known as the Great Divergence, when in the West the Age of Enlightenment and the scientific revolution brought about great societal changes, Christianity was confronted with various forms of skepticism and with certain modern political ideologies, such as versions of socialism and liberalism. 
Events ranged from mere anti-clericalism to violent outbursts against Christianity, such as the dechristianization of France during the French Revolution, the Spanish Civil War, and certain Marxist movements, especially the Russian Revolution and the persecution of Christians in the Soviet Union under state atheism. Especially pressing in Europe was the formation of nation states after the Napoleonic era. In all European countries, different Christian denominations found themselves in competition to greater or lesser extents with each other and with the state. Variables were the relative sizes of the denominations and the religious, political, and ideological orientation of the states. Urs Altermatt of the University of Fribourg, looking specifically at Catholicism in Europe, identifies four models for the European nations. In traditionally Catholic-majority countries such as Belgium, Spain, and Austria, to some extent, religious and national communities are more or less identical. Cultural symbiosis and separation are found in Poland, the Republic of Ireland, and Switzerland, all countries with competing denominations. Competition is found in Germany, the Netherlands, and again Switzerland, all countries with minority Catholic populations, which to a greater or lesser extent identified with the nation. Finally, separation between religion (again, specifically Catholicism) and the state is found to a great degree in France and Italy, countries where the state actively opposed itself to the authority of the Catholic Church. The combined factors of the formation of nation states and ultramontanism, especially in Germany and the Netherlands, but also in England to a much lesser extent, often forced Catholic churches, organizations, and believers to choose between the national demands of the state and the authority of the Church, specifically the papacy. This conflict came to a head in the First Vatican Council, and in Germany would lead directly to the Kulturkampf, where liberals and Protestants under the leadership of Bismarck managed to severely restrict Catholic expression and organization. Christian commitment in Europe dropped as modernity and secularism came into their own, particularly in Czechia and Estonia, while religious commitments in America have been generally high in comparison to Europe. The late 20th century has shown the shift of Christian adherence to the Third World and the Southern Hemisphere in general, with the West no longer the chief standard bearer of Christianity. Approximately 7 to 10% of Arabs are Christians, most prevalent in Egypt, Syria and Lebanon. Demographics With around 2.4 billion adherents, split into three main branches of Catholic, Protestant, and Eastern Orthodox, Christianity is the world's largest religion. The Christian share of the world's population has stood at around 33% for the last hundred years, which means that one in three persons on Earth are Christians. This masks a major shift in the demographics of Christianity; large increases in the developing world have been accompanied by substantial declines in the developed world, mainly in Western Europe and North America. According to a 2015 Pew Research Center study, within the next four decades, Christianity will remain the largest religion; and by 2050, the Christian population is expected to exceed 3 billion. According to some scholars, Christianity ranks at first place in net gains through religious conversion. 
As a percentage of Christians, the Catholic Church and Orthodoxy (both Eastern and Oriental) are declining in some parts of the world (though Catholicism is growing in Asia and Africa, and remains vibrant in Eastern Europe), while Protestants and other Christians are on the rise in the developing world. So-called popular Protestantism is one of the fastest-growing religious categories in the world. Nevertheless, Catholicism will also continue to grow, reaching 1.63 billion by 2050, according to Todd Johnson of the Center for the Study of Global Christianity. Africa alone was expected to be home to 230 million Catholics by 2015. And if, as the U.N. projected in 2018, Africa's population reaches 4.5 billion by 2100 (rather than the 2 billion predicted in 2004), Catholicism will indeed grow, as will other religious groups. According to Pew Research Center, Africa is expected to be home to 1.1 billion African Christians by 2050. In 2010, 87% of the world's Christian population lived in countries where Christians are in the majority, while 13% lived in countries where Christians are in the minority. Christianity is the predominant religion in Europe, the Americas, Oceania, and Southern Africa. In Asia, it is the dominant religion in Armenia, Cyprus, Georgia, East Timor, and the Philippines. However, it is declining in some areas, including the northern and western United States, some areas in Oceania (Australia and New Zealand), northern Europe (including Great Britain, Scandinavia and other places), France, Germany, the Canadian provinces of Ontario, British Columbia, and Quebec, and some parts of Asia (especially the Middle East, due to Christian emigration, and Macau). The Christian population is not decreasing in Brazil, the southern United States, or the province of Alberta, Canada, but the percentage of Christians there is decreasing. Since the fall of communism, the proportion of Christians has been stable or has even increased in the Central and Eastern European countries. Christianity is growing rapidly in both numbers and percentage in China, other Asian countries, Sub-Saharan Africa, Latin America, Eastern Europe, North Africa (Maghreb), Gulf Cooperation Council countries, and Oceania. Despite declining numbers, Christianity remains the dominant religion in the Western World, where 70% are Christians. Christianity remains the largest religion in Western Europe, where 71% of Western Europeans identified themselves as Christian in 2018. A 2011 Pew Research Center survey found that 76% of Europeans, 73% in Oceania and about 86% in the Americas (90% in Latin America and 77% in North America) identified themselves as Christians. By 2010 about 157 countries and territories in the world had Christian majorities. However, there are many charismatic movements that have become well established over large parts of the world, especially Africa, Latin America, and Asia. Since 1900, primarily due to conversion, Protestantism has spread rapidly in Africa, Asia, Oceania, and Latin America. From 1960 to 2000, the number of reported Evangelical Protestants worldwide grew at three times the world's population growth rate, and twice that of Islam. According to the historian Geoffrey Blainey from the University of Melbourne, since the 1960s there has been a substantial increase in the number of conversions from Islam to Christianity, mostly to the Evangelical and Pentecostal forms. A study conducted by St. 
Mary's University estimated about 10.2 million Muslim converts to Christianity in 2015. According to the study, significant numbers of Muslim converts to Christianity can be found in Afghanistan, Azerbaijan, Central Asia (including Kazakhstan, Kyrgyzstan, and other countries), Indonesia, Malaysia, the Middle East (including Iran, Saudi Arabia, Turkey, and other countries), North Africa (including Algeria, Morocco, and Tunisia), Sub-Saharan Africa, and the Western World (including Albania, Belgium, France, Germany, Kosovo, the Netherlands, Russia, Scandinavia, the United Kingdom, the United States, and other western countries). It is also reported that Christianity is popular among people of different backgrounds in Africa and Asia. According to a report by the Singapore Management University, more people in Southeast Asia are converting to Christianity, and many of them are young and have a university degree. According to scholars Juliette Koning and Heidi Dahles of Vrije Universiteit Amsterdam, there is a "rapid expansion" of Christianity in Singapore, China, Hong Kong, Taiwan, Indonesia, Malaysia, and South Korea. According to scholar Terence Chong from the Institute of Southeast Asian Studies, since the 1980s Christianity has been expanding in China, Singapore, Indonesia, Japan, Malaysia, Taiwan, South Korea, and Vietnam. In most countries in the developed world, church attendance among people who continue to identify themselves as Christians has been falling over the last few decades. Some sources view this simply as part of a drift away from traditional membership institutions, while others link it to signs of a decline in belief in the importance of religion in general. Europe's Christian population, though in decline, still constitutes the largest geographical component of the religion. According to data from the 2012 European Social Survey, around a third of European Christians say they attend services once a month or more; by contrast, more than two-thirds of Latin American Christians do. According to the World Values Survey, about 90% of African Christians (in Ghana, Nigeria, Rwanda, South Africa and Zimbabwe) said they attended church regularly. Christianity, in one form or another, is the sole state religion of the following nations: Argentina (Catholic), Tuvalu (Reformed), Tonga (Methodist), Norway (Lutheran), Costa Rica (Catholic), the Kingdom of Denmark (Lutheran), England (Anglican), Georgia (Georgian Orthodox), Greece (Greek Orthodox), Iceland (Lutheran), Liechtenstein (Catholic), Malta (Catholic), Monaco (Catholic), and Vatican City (Catholic). There are numerous other countries, such as Cyprus, which, although they do not have an established church, still give official recognition and support to a specific Christian denomination.

Churches and denominations

The four primary divisions of Christianity are the Catholic Church, the Eastern Orthodox Church, Oriental Orthodoxy, and Protestantism. A broader distinction that is sometimes drawn is between Eastern Christianity and Western Christianity, which has its origins in the East–West Schism (Great Schism) of the 11th century. More recently, forms of Christianity that are neither Western nor Eastern have also stood out, for example the African-initiated churches. However, there are other present and historical Christian groups that do not fit neatly into one of these primary categories. There is a diversity of doctrines and liturgical practices among groups calling themselves Christian. 
These groups may vary ecclesiologically in their views on a classification of Christian denominations. The Nicene Creed (325), however, is typically accepted as authoritative by most Christians, including the Catholic, Eastern Orthodox, Oriental Orthodox, and major Protestant (including Anglican) denominations.

Catholic Church

The Catholic Church consists of those particular churches, headed by bishops, in communion with the pope, the bishop of Rome, as its highest authority in matters of faith, morality, and church governance. Like Eastern Orthodoxy, the Catholic Church, through apostolic succession, traces its origins to the Christian community founded by Jesus Christ. Catholics maintain that the "one, holy, catholic, and apostolic church" founded by Jesus subsists fully in the Catholic Church, but the church also acknowledges other Christian churches and communities and works towards reconciliation among all Christians. The Catholic faith is detailed in the Catechism of the Catholic Church. Of its seven sacraments, the Eucharist is the principal one, celebrated liturgically in the Mass. The church teaches that through consecration by a priest, the sacrificial bread and wine become the body and blood of Christ. The Virgin Mary is venerated in the Catholic Church as Mother of God and Queen of Heaven, honoured in dogmas and devotions. Its teaching includes Divine Mercy, sanctification through faith and evangelization of the Gospel, as well as Catholic social teaching, which emphasises voluntary support for the sick, the poor, and the afflicted through the corporal and spiritual works of mercy. The Catholic Church operates thousands of Catholic schools, universities, hospitals, and orphanages around the world, and is the largest non-government provider of education and health care in the world. Among its other social services are numerous charitable and humanitarian organizations. Canon law is the system of laws and legal principles made and enforced by the hierarchical authorities of the Catholic Church to regulate its external organisation and government and to order and direct the activities of Catholics toward the mission of the church. The canon law of the Latin Church was the first modern Western legal system and is the oldest continuously functioning legal system in the West, while the distinctive traditions of Eastern Catholic canon law govern the 23 Eastern Catholic particular churches sui iuris. As the world's oldest and largest continuously functioning international institution, it has played a prominent role in the history and development of Western civilization. The 2,834 sees are grouped into 24 particular autonomous Churches (the largest of which is the Latin Church), each with its own distinct traditions regarding the liturgy and the administering of sacraments. With more than 1.1 billion baptized members, the Catholic Church is the largest Christian church and represents 50.1% of all Christians as well as one-sixth of the world's population. Catholics live all over the world through missions, diaspora, and conversions.

Eastern Orthodox Church

The Eastern Orthodox Church consists of those churches in communion with the patriarchal sees of the East, such as the Ecumenical Patriarch of Constantinople. Like the Catholic Church, the Eastern Orthodox Church also traces its heritage to the foundation of Christianity through apostolic succession and has an episcopal structure, though the autonomy of its component parts is emphasized, and most of them are national churches. 
Eastern Orthodox theology is based on holy tradition which incorporates the dogmatic decrees of the seven Ecumenical Councils, the Scriptures, and the teaching of the Church Fathers. The church teaches that it is the one, holy, catholic and apostolic church established by Jesus Christ in his Great Commission, and that its bishops are the successors of Christ's apostles. It maintains that it practises the original Christian faith, as passed down by holy tradition. Its patriarchates, reminiscent of the pentarchy, and other autocephalous and autonomous churches reflect a variety of hierarchical organisation. It recognises seven major sacraments, of which the Eucharist is the principal one, celebrated liturgically in synaxis. The church teaches that through consecration invoked by a priest, the sacrificial bread and wine become the body and blood of Christ. The Virgin Mary is venerated in the Eastern Orthodox Church as the God-bearer, honoured in devotions. Eastern Orthodoxy is the second largest single denomination in Christianity, with an estimated 230 million adherents, although Protestants collectively outnumber them, substantially. As one of the oldest surviving religious institutions in the world, the Eastern Orthodox Church has played a prominent role in the history and culture of Eastern and Southeastern Europe, the Caucasus, and the Near East. Oriental Orthodoxy The Oriental Orthodox Churches (also called "Old Oriental" churches) are those eastern churches that recognize the first three ecumenical councils—Nicaea, Constantinople, and Ephesus—but reject the dogmatic definitions of the Council of Chalcedon and instead espouse a Miaphysite christology. The Oriental Orthodox communion consists of six groups: Syriac Orthodox, Coptic Orthodox, Ethiopian Orthodox, Eritrean Orthodox, Malankara Orthodox Syrian Church (India), and Armenian Apostolic churches. These six churches, while being in communion with each other, are completely independent hierarchically. These churches are generally not in communion with the Eastern Orthodox Church, with whom they are in dialogue for erecting a communion. Together, they have about 62 million members worldwide. As some of the oldest religious institutions in the world, the Oriental Orthodox Churches have played a prominent role in the history and culture of Armenia, Egypt, Turkey, Eritrea, Ethiopia, Sudan and parts of the Middle East and India. An Eastern Christian body of autocephalous churches, its bishops are equal by virtue of episcopal ordination, and its doctrines can be summarized in that the churches recognize the validity of only the first three ecumenical councils. Assyrian Church of the East The Assyrian Church of the East, with an unbroken patriarchate established in the 17th century, is an independent Eastern Christian denomination which claims continuity from the Church of the East—in parallel to the Catholic patriarchate established in the 16th century that evolved into the Chaldean Catholic Church, an Eastern Catholic church in full communion with the Pope. It is an Eastern Christian church that follows the traditional christology and ecclesiology of the historical Church of the East. Largely aniconic and not in communion with any other church, it belongs to the eastern branch of Syriac Christianity, and uses the East Syriac Rite in its liturgy. Its main spoken language is Syriac, a dialect of Eastern Aramaic, and the majority of its adherents are ethnic Assyrians. 
It is officially headquartered in the city of Erbil in northern Iraqi Kurdistan, and its original area also spreads into south-eastern Turkey and north-western Iran, corresponding to ancient Assyria. Its hierarchy is composed of metropolitan bishops and diocesan bishops, while lower clergy consists of priests and deacons, who serve in dioceses (eparchies) and parishes throughout the Middle East, India, North America, Oceania, and Europe (including the Caucasus and Russia). The Ancient Church of the East distinguished itself from the Assyrian Church of the East in 1964. It is one of the Assyrian churches that claim continuity with the historical Church of the East, one of the oldest Christian churches in Mesopotamia. Protestantism In 1521, the Edict of Worms condemned Martin Luther and officially banned citizens of the Holy Roman Empire from defending or propagating his ideas. This split within the Roman Catholic church is now called the Reformation. Prominent Reformers included Martin Luther, Huldrych Zwingli, and John Calvin. The 1529 Protestation at Speyer against being excommunicated gave this party the name Protestantism. Luther's primary theological heirs are known as Lutherans. Zwingli and Calvin's heirs are far broader denominationally, and are referred to as the Reformed tradition. Protestants have developed their own culture, with major contributions in education, the humanities and sciences, the political and social order, the economy and the arts, and many other fields. The Anglican churches descended from the Church of England and organized in the Anglican Communion. Some, but not all Anglicans consider themselves both Protestant and Catholic. Since the Anglican, Lutheran, and the Reformed branches of Protestantism originated for the most part in cooperation with the government, these movements are termed the "Magisterial Reformation". On the other hand, groups such as the Anabaptists, who often do not consider themselves to be Protestant, originated in the Radical Reformation, which though sometimes protected under Acts of Toleration, do not trace their history back to any state church. They are further distinguished by their rejection of infant baptism; they believe in baptism only of adult believers—credobaptism (Anabaptists include the Amish, Apostolic, Mennonites, Hutterites, River Brethren and Schwarzenau Brethren/German Baptist groups.) The term Protestant also refers to any churches which formed later, with either the Magisterial or Radical traditions. In the 18th century, for example, Methodism grew out of Anglican minister John Wesley's evangelical revival movement. Several Pentecostal and non-denominational churches, which emphasize the cleansing power of the Holy Spirit, in turn grew out of Methodism. Because Methodists, Pentecostals and other evangelicals stress "accepting Jesus as your personal Lord and Savior", which comes from Wesley's emphasis of the New Birth, they often refer to themselves as being born-again. Protestantism is the second largest major group of Christians after Catholicism by number of followers, although the Eastern Orthodox Church is larger than any single Protestant denomination. Estimates vary, mainly over the question of which denominations to classify as Protestant. Yet, the total number of Protestant Christians is generally estimated between 800 million and 1 billion, corresponding to nearly 40% of world's Christians. The majority of Protestants are members of just a handful of denominational families, i.e. 
Adventists, Anglicans, Baptists, Reformed (Calvinists), Lutherans, Methodists, Moravians/Hussites, and Pentecostals. Nondenominational, evangelical, charismatic, neo-charismatic, independent, and other churches are on the rise, and constitute a significant part of Protestant Christianity. Some groups of individuals who hold basic Protestant tenets identify themselves simply as "Christians" or "born-again Christians". They typically distance themselves from the confessionalism and creedalism of other Christian communities by calling themselves "non-denominational" or "evangelical". Often founded by individual pastors, they have little affiliation with historic denominations. Restorationism The Second Great Awakening, a period of religious revival that occurred in the United States during the early 1800s, saw the development of a number of unrelated churches. They generally saw themselves as restoring the original church of Jesus Christ rather than reforming one of the existing churches. A common belief held by Restorationists was that the other divisions of Christianity had introduced doctrinal defects into Christianity, which was known as the Great Apostasy. In Asia, Iglesia ni Cristo is a known restorationist religion that was established during the early 1900s. Some of the churches originating during this period are historically connected to early 19th-century camp meetings in the Midwest and upstate New York. One of the largest churches produced from the movement is The Church of Jesus Christ of Latter-day Saints. American Millennialism and Adventism, which arose from Evangelical Protestantism, influenced the Jehovah's Witnesses movement and, as a reaction specifically to William Miller, the Seventh-day Adventists. Others, including the Christian Church (Disciples of Christ), Evangelical Christian Church in Canada, Churches of Christ, and the Christian churches and churches of Christ, have their roots in the contemporaneous Stone-Campbell Restoration Movement, which was centered in Kentucky and Tennessee. Other groups originating in this time period include the Christadelphians and the previously mentioned Latter Day Saints movement. While the churches originating in the Second Great Awakening have some superficial similarities, their doctrine and practices vary significantly. Other Within Italy, Poland, Lithuania, Transylvania, Hungary, Romania, and the United Kingdom, Unitarian Churches emerged from the Reformed tradition in the 16th century; the Unitarian Church of Transylvania is an example such a denomination that arose in this era. They adopted the Anabaptist doctrine of credobaptism. Various smaller Independent Catholic communities, such as the Old Catholic Church, include the word Catholic in their title, and arguably have more or less liturgical practices in common with the Catholic Church, but are no longer in full communion with the Holy See. Spiritual Christians, such as the Doukhobors and Molokans, broke from the Russian Orthodox Church and maintain close association with Mennonites and Quakers due to similar religious practices; all of these groups are furthermore collectively considered to be peace churches due to their belief in pacifism. Messianic Judaism (or the Messianic Movement) is the name of a Christian movement comprising a number of streams, whose members may consider themselves Jewish. The movement originated in the 1960s and 1970s, and it blends elements of religious Jewish practice with evangelical Christianity. 
Messianic Judaism affirms Christian creeds such as the messiahship and divinity of "Yeshua" (the Hebrew name of Jesus) and the Triune Nature of God, while also adhering to some Jewish dietary laws and customs. Esoteric Christians regard Christianity as a mystery religion and profess the existence and possession of certain esoteric doctrines or practices, hidden from the public and accessible only to a narrow circle of "enlightened", "initiated", or highly educated people. Some of the esoteric Christian institutions include the Rosicrucian Fellowship, the Anthroposophical Society, and Martinism. Nondenominational Christianity or non-denominational Christianity consists of churches which typically distance themselves from the confessionalism or creedalism of other Christian communities by not formally aligning with a specific Christian denomination. Nondenominational Christianity first arose in the 18th century through the Stone-Campbell Restoration Movement, with followers organizing themselves simply as "Christians" and "Disciples of Christ", but many typically adhere to evangelical Christianity.

Influence on Western culture

Western culture, throughout most of its history, has been nearly equivalent to Christian culture, and a large portion of the population of the Western Hemisphere can be described as practicing or nominal Christians. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom". Many historians even credit Christianity with being the link that created a unified European identity. Though Western culture contained several polytheistic religions during its early years under the Greek and Roman empires, as the centralized Roman power waned, the dominance of the Catholic Church was the only consistent force in Western Europe. Until the Age of Enlightenment, Christian culture guided the course of philosophy, literature, art, music and science. Christian disciplines of the respective arts have subsequently developed into Christian philosophy, Christian art, Christian music, Christian literature, and so on. Christianity has had a significant impact on education, as the church created the bases of the Western system of education, and was the sponsor of founding universities in the Western world, as the university is generally regarded as an institution that has its origin in the Medieval Christian setting. Historically, Christianity has often been a patron of science and medicine; many Catholic clergy, Jesuits in particular, have been active in the sciences throughout history and have made significant contributions to the development of science. Protestantism also has had an important influence on science. According to the Merton Thesis, there was a positive correlation between the rise of English Puritanism and German Pietism on the one hand, and early experimental science on the other. The civilizing influence of Christianity includes social welfare, founding hospitals, economics (as the Protestant work ethic), architecture, politics, literature, personal hygiene (ablution), and family life. Eastern Christians (particularly Nestorian Christians) contributed to the Arab Islamic civilization during the reigns of the Umayyads and the Abbasids by translating the works of Greek philosophers into Syriac and afterwards into Arabic. They also excelled in philosophy, science, theology, and medicine. 
Christians have made a myriad of contributions to human progress in a broad and diverse range of fields, including philosophy, science and technology, medicine, fine arts and architecture, politics, literature, music, and business. According to 100 Years of Nobel Prizes, a review of the Nobel Prizes awarded between 1901 and 2000 reveals that 65.4% of Nobel Prize laureates identified Christianity in its various forms as their religious preference. Cultural Christians are secular people with a Christian heritage who may not believe in the religious claims of Christianity, but who retain an affinity for the popular culture, art, music, and so on related to the religion. Postchristianity is the term for the decline of Christianity, particularly in Europe, Canada, Australia, and to a minor degree the Southern Cone, in the 20th and 21st centuries, considered in terms of postmodernism. It refers to the loss of Christianity's monopoly on values and world view in historically Christian societies.

Ecumenism

Christian groups and denominations have long expressed ideals of being reconciled, and in the 20th century, Christian ecumenism advanced in two ways. One way was greater cooperation between groups, such as the World Evangelical Alliance founded in 1846 in London, the Edinburgh Missionary Conference of Protestants in 1910, the Justice, Peace and Creation Commission of the World Council of Churches founded in 1948 by Protestant and Orthodox churches, and similar national councils like the National Council of Churches in Australia, which includes Catholics. The other way was institutional union to form united churches, a practice that can be traced back to unions between Lutherans and Calvinists in early 19th-century Germany. Congregationalist, Methodist, and Presbyterian churches united in 1925 to form the United Church of Canada, and in 1977 to form the Uniting Church in Australia. The Church of South India was formed in 1947 by the union of Anglican, Baptist, Methodist, Congregationalist, and Presbyterian churches. The Christian Flag is an ecumenical flag designed in the early 20th century to represent all of Christianity and Christendom. The ecumenical, monastic Taizé Community is notable for being | Church of the East split after the Council of Ephesus (431) and Oriental Orthodoxy split after the Council of Chalcedon (451) over differences in Christology, while the Eastern Orthodox Church and the Catholic Church separated in the East–West Schism (1054), especially over the authority of the bishop of Rome. Protestantism split into numerous denominations from the Catholic Church in the Reformation era (16th century) over theological and ecclesiological disputes, most predominantly on the issue of justification and the primacy of the bishop of Rome. Christianity played a prominent role in the development of Western civilization, particularly in Europe from late antiquity through the Middle Ages. Following the Age of Discovery (15th–17th century), Christianity was spread into the Americas, Oceania, sub-Saharan Africa, and the rest of the world via missionary work. The four largest branches of Christianity are the Catholic Church (1.3 billion/50.1%), Protestantism (920 million/36.7%), the Eastern Orthodox Church (230 million), and the Oriental Orthodox churches (62 million) (Orthodox churches combined at 11.9%), though thousands of smaller church communities exist despite efforts toward unity (ecumenism). 
Despite a decline in adherence in the West, Christianity remains the dominant religion in the region, with about 70% of that population identifying as Christian. Christianity is growing in Africa and Asia, the world's most populous continents. Christians remain persecuted in some regions of the world, especially in the Middle East, North Africa, East Asia, and South Asia.

Etymology

Early Jewish Christians referred to themselves as 'The Way', probably coming from Isaiah 40:3, "prepare the way of the Lord." According to Acts 11:26, the term "Christian", meaning "followers of Christ" in reference to Jesus's disciples, was first used in the city of Antioch by the non-Jewish inhabitants there. The earliest recorded use of the term "Christianity" was by Ignatius of Antioch around 100 AD.

Beliefs

While Christians worldwide share basic convictions, there are also differences in interpretations and opinions of the Bible and the sacred traditions on which Christianity is based.

Creeds

Concise doctrinal statements or confessions of religious beliefs are known as creeds. They began as baptismal formulae and were later expanded during the Christological controversies of the 4th and 5th centuries to become statements of faith. "Jesus is Lord" is the earliest creed of Christianity and continues to be used, as with the World Council of Churches. The Apostles' Creed is the most widely accepted statement of the articles of Christian faith. It is used by a number of Christian denominations for both liturgical and catechetical purposes, most visibly by liturgical churches of Western Christian tradition, including the Latin Church of the Catholic Church, Lutheranism, Anglicanism, and Western Rite Orthodoxy. It is also used by Presbyterians, Methodists, and Congregationalists. This particular creed was developed between the 2nd and 9th centuries. Its central doctrines are those of the Trinity and God the Creator. Each of the doctrines found in this creed can be traced to statements current in the apostolic period. The creed was apparently used as a summary of Christian doctrine for baptismal candidates in the churches of Rome. Its points include: belief in God the Father, Jesus Christ as the Son of God, and the Holy Spirit; the death, descent into hell, resurrection and ascension of Christ; the holiness of the Church and the communion of saints; and Christ's second coming, the Day of Judgement and salvation of the faithful. The Nicene Creed was formulated, largely in response to Arianism, at the Councils of Nicaea and Constantinople in 325 and 381 respectively, and ratified as the universal creed of Christendom by the First Council of Ephesus in 431. The Chalcedonian Definition, or Creed of Chalcedon, developed at the Council of Chalcedon in 451, though rejected by the Oriental Orthodox, taught Christ "to be acknowledged in two natures, inconfusedly, unchangeably, indivisibly, inseparably": one divine and one human, and that both natures, while perfect in themselves, are nevertheless also perfectly united into one person. The Athanasian Creed, received in the Western Church as having the same status as the Nicene and Chalcedonian, says: "We worship one God in Trinity, and Trinity in Unity; neither confounding the Persons nor dividing the Substance." Most Christians (Catholic, Eastern Orthodox, Oriental Orthodox, and Protestant alike) accept the use of creeds, and subscribe to at least one of the creeds mentioned above. 
Many Evangelical Protestants reject creeds as definitive statements of faith, even while agreeing with some or all of the substance of the creeds. For example, most Baptists do not use creeds "in that they have not sought to establish binding authoritative confessions of faith on one another." Also rejecting creeds are groups with roots in the Restoration Movement, such as the Christian Church (Disciples of Christ), the Evangelical Christian Church in Canada, and the Churches of Christ. Jesus The central tenet of Christianity is the belief in Jesus as the Son of God and the Messiah (Christ). Christians believe that Jesus, as the Messiah, was anointed by God as savior of humanity and hold that Jesus' coming was the fulfillment of messianic prophecies of the Old Testament. The Christian concept of messiah differs significantly from the contemporary Jewish concept. The core Christian belief is that through belief in and acceptance of the death and resurrection of Jesus, sinful humans can be reconciled to God, and thereby are offered salvation and the promise of eternal life. While there have been many theological disputes over the nature of Jesus over the earliest centuries of Christian history, generally, Christians believe that Jesus is God incarnate and "true God and true man" (or both fully divine and fully human). Jesus, having become fully human, suffered the pains and temptations of a mortal man, but did not sin. As fully God, he rose to life again. According to the New Testament, he rose from the dead, ascended to heaven, is seated at the right hand of the Father, and will ultimately return to fulfill the rest of the Messianic prophecy, including the resurrection of the dead, the Last Judgment, and the final establishment of the Kingdom of God. According to the canonical gospels of Matthew and Luke, Jesus was conceived by the Holy Spirit and born from the Virgin Mary. Little of Jesus' childhood is recorded in the canonical gospels, although infancy gospels were popular in antiquity. In comparison, his adulthood, especially the week before his death, is well documented in the gospels contained within the New Testament, because that part of his life is believed to be most important. The biblical accounts of Jesus' ministry include: his baptism, miracles, preaching, teaching, and deeds. Death and resurrection Christians consider the resurrection of Jesus to be the cornerstone of their faith (see 1 Corinthians 15) and the most important event in history. Among Christian beliefs, the death and resurrection of Jesus are two core events on which much of Christian doctrine and theology is based. According to the New Testament, Jesus was crucified, died a physical death, was buried within a tomb, and rose from the dead three days later. The New Testament mentions several post-resurrection appearances of Jesus on different occasions to his twelve apostles and disciples, including "more than five hundred brethren at once", before Jesus' ascension to heaven. Jesus' death and resurrection are commemorated by Christians in all worship services, with special emphasis during Holy Week, which includes Good Friday and Easter Sunday. The death and resurrection of Jesus are usually considered the most important events in Christian theology, partly because they demonstrate that Jesus has power over life and death and therefore has the authority and power to give people eternal life. Christian churches accept and teach the New Testament account of the resurrection of Jesus with very few exceptions. 
Some modern scholars use the belief of Jesus' followers in the resurrection as a point of departure for establishing the continuity of the historical Jesus and the proclamation of the early church. Some liberal Christians do not accept a literal bodily resurrection, seeing the story as richly symbolic and spiritually nourishing myth. Arguments over death and resurrection claims occur at many religious debates and interfaith dialogues. Paul the Apostle, an early Christian convert and missionary, wrote, "If Christ was not raised, then all our preaching is useless, and your trust in God is useless." Salvation Paul the Apostle, like Jews and Roman pagans of his time, believed that sacrifice can bring about new kinship ties, purity, and eternal life. For Paul, the necessary sacrifice was the death of Jesus: Gentiles who are "Christ's" are, like Israel, descendants of Abraham and "heirs according to the promise" The God who raised Jesus from the dead would also give new life to the "mortal bodies" of Gentile Christians, who had become with Israel, the "children of God", and were therefore no longer "in the flesh". Modern Christian churches tend to be much more concerned with how humanity can be saved from a universal condition of sin and death than the question of how both Jews and Gentiles can be in God's family. According to Eastern Orthodox theology, based upon their understanding of the atonement as put forward by Irenaeus' recapitulation theory, Jesus' death is a ransom. This restores the relation with God, who is loving and reaches out to humanity, and offers the possibility of theosis c.q. divinization, becoming the kind of humans God wants humanity to be. According to Catholic doctrine, Jesus' death satisfies the wrath of God, aroused by the offense to God's honor caused by human's sinfulness. The Catholic Church teaches that salvation does not occur without faithfulness on the part of Christians; converts must live in accordance with principles of love and ordinarily must be baptized. In Protestant theology, Jesus' death is regarded as a substitutionary penalty carried by Jesus, for the debt that has to be paid by humankind when it broke God's moral law. Martin Luther taught that baptism was necessary for salvation, but modern Lutherans and other Protestants tend to teach that salvation is a gift that comes to an individual by God's grace, sometimes defined as "unmerited favor", even apart from baptism. Christians differ in their views on the extent to which individuals' salvation is pre-ordained by God. Reformed theology places distinctive emphasis on grace by teaching that individuals are completely incapable of self-redemption, but that sanctifying grace is irresistible. In contrast Catholics, Orthodox Christians, and Arminian Protestants believe that the exercise of free will is necessary to have faith in Jesus. Trinity Trinity refers to the teaching that the one God comprises three distinct, eternally co-existing persons: the Father, the Son (incarnate in Jesus Christ), and the Holy Spirit. Together, these three persons are sometimes called the Godhead, although there is no single term in use in Scripture to denote the unified Godhead. In the words of the Athanasian Creed, an early statement of Christian belief, "the Father is God, the Son is God, and the Holy Spirit is God, and yet there are not three Gods but one God". They are distinct from another: the Father has no source, the Son is begotten of the Father, and the Spirit proceeds from the Father. 
Though distinct, the three persons cannot be divided from one another in being or in operation. While some Christians also believe that God appeared as the Father in the Old Testament, it is agreed that he appeared as the Son in the New Testament, and will still continue to manifest as the Holy Spirit in the present. But still, God still existed as three persons in each of these times. However, traditionally there is a belief that it was the Son who appeared in the Old Testament because, for example, when the Trinity is depicted in art, the Son typically has the distinctive appearance, a cruciform halo identifying Christ, and in depictions of the Garden of Eden, this looks forward to an Incarnation yet to occur. In some Early Christian sarcophagi the Logos is distinguished with a beard, "which allows him to appear ancient, even pre-existent." The Trinity is an essential doctrine of mainstream Christianity. From earlier than the times of the Nicene Creed (325) Christianity advocated the triune mystery-nature of God as a normative profession of faith. According to Roger E. Olson and Christopher Hall, through prayer, meditation, study and practice, the Christian community concluded "that God must exist as both a unity and trinity", codifying this in ecumenical council at the end of the 4th century. According to this doctrine, God is not divided in the sense that each person has a third of the whole; rather, each person is considered to be fully God (see Perichoresis). The distinction lies in their relations, the Father being unbegotten; the Son being begotten of the Father; and the Holy Spirit proceeding from the Father and (in Western Christian theology) from the Son. Regardless of this apparent difference, the three "persons" are each eternal and omnipotent. Other Christian religions including Unitarian Universalism, Jehovah's Witnesses, and Mormonism, do not share those views on the Trinity. The Greek word trias is first seen in this sense in the works of Theophilus of Antioch; his text reads: "of the Trinity, of God, and of His Word, and of His Wisdom". The term may have been in use before this time; its Latin equivalent, trinitas, appears afterwards with an explicit reference to the Father, the Son, and the Holy Spirit, in Tertullian. In the following century, the word was in general use. It is found in many passages of Origen. Trinitarians Trinitarianism denotes Christians who believe in the concept of the Trinity. Almost all Christian denominations and churches hold Trinitarian beliefs. Although the words "Trinity" and "Triune" do not appear in the Bible, beginning in the 3rd century theologians developed the term and concept to facilitate comprehension of the New Testament teachings of God as being Father, Son, and Holy Spirit. Since that time, Christian theologians have been careful to emphasize that Trinity does not imply that there are three gods (the antitrinitarian heresy of Tritheism), nor that each hypostasis of the Trinity is one-third of an infinite God (partialism), nor that the Son and the Holy Spirit are beings created by and subordinate to the Father (Arianism). Rather, the Trinity is defined as one God in three persons. Nontrinitarianism Nontrinitarianism (or antitrinitarianism) refers to theology that rejects the doctrine of the Trinity. Various nontrinitarian views, such as adoptionism or modalism, existed in early Christianity, leading to the disputes about Christology. 
Nontrinitarianism reappeared in the Gnosticism of the Cathars between the 11th and 13th centuries, among groups with Unitarian theology in the Protestant Reformation of the 16th century, in the 18th-century Enlightenment, amongst some groups arising during the Second Great Awakening of the 19th century, and most recently, in Oneness Pentecostal churches. Eschatology The end of things, whether the end of an individual life, the end of the age, or the end of the world, broadly speaking, is Christian eschatology; the study of the destiny of humans as it is revealed in the Bible. The major issues in Christian eschatology are the Tribulation, death and the afterlife, (mainly for Evangelical groups) the Millennium and the following Rapture, the Second Coming of Jesus, Resurrection of the Dead, Heaven, (for liturgical branches) Purgatory, and Hell, the Last Judgment, the end of the world, and the New Heavens and New Earth. Christians believe that the second coming of Christ will occur at the end of time, after a period of severe persecution (the Great Tribulation). All who have died will be resurrected bodily from the dead for the Last Judgment. Jesus will fully establish the Kingdom of God in fulfillment of scriptural prophecies. Death and afterlife Most Christians believe that human beings experience divine judgment and are rewarded either with eternal life or eternal damnation. This includes the general judgement at the resurrection of the dead as well as the belief (held by Catholics, Orthodox and most Protestants) in a judgment particular to the individual soul upon physical death. In the Catholic branch of Christianity, those who die in a state of grace, i.e., without any mortal sin separating them from God, but are still imperfectly purified from the effects of sin, undergo purification through the intermediate state of purgatory to achieve the holiness necessary for entrance into God's presence. Those who have attained this goal are called saints (Latin sanctus, "holy"). Some Christian groups, such as Seventh-day Adventists, hold to mortalism, the belief that the human soul is not naturally immortal, and is unconscious during the intermediate state between bodily death and resurrection. These Christians also hold to Annihilationism, the belief that subsequent to the final judgement, the wicked will cease to exist rather than suffer everlasting torment. Jehovah's Witnesses hold to a similar view. Practices Depending on the specific denomination of Christianity, practices may include baptism, the Eucharist (Holy Communion or the Lord's Supper), prayer (including the Lord's Prayer), confession, confirmation, burial rites, marriage rites and the religious education of children. Most denominations have ordained clergy who lead regular communal worship services. Christian rites, rituals, and ceremonies are not celebrated in one single sacred language. Many ritualistic Christian churches make a distinction between sacred language, liturgical language and vernacular language. The three important languages in the early Christian era were: Latin, Greek and Syriac. Communal worship Services of worship typically follow a pattern or form known as liturgy. 
Justin Martyr described 2nd-century Christian liturgy in his First Apology () to Emperor Antoninus Pius, and his description remains relevant to the basic structure of Christian liturgical worship: Thus, as Justin described, Christians assemble for communal worship typically on Sunday, the day of the resurrection, though other liturgical practices often occur outside this setting. Scripture readings are drawn from the Old and New Testaments, but especially the gospels. Instruction is given based on these readings, called a sermon or homily. There are a variety of congregational prayers, including thanksgiving, confession, and intercession, which occur throughout the service and take a variety of forms including recited, responsive, silent, or sung. Psalms, hymns, or worship songs may be sung. Services can be varied for special events like significant feast days. Nearly all forms of worship incorporate the Eucharist, which consists of a meal. It is reenacted in accordance with Jesus' instruction at the Last Supper that his followers do in remembrance of him as when he gave his disciples bread, saying, "This is my body", and gave them wine saying, "This is my blood". In the early church, Christians and those yet to complete initiation would separate for the Eucharistic part of the service. Some denominations such as Confessional Lutheran churches continue to practice 'closed communion'. They offer communion to those who are already united in that denomination or sometimes individual church. Catholics further restrict participation to their members who are not in a state of mortal sin. Many other churches, such as Anglican Communion and United Methodist Church, practice 'open communion' since they view communion as a means to unity, rather than an end, and invite all believing Christians to participate. Sacraments or ordinances In Christian belief and practice, a sacrament is a rite, instituted by Christ, that confers grace, constituting a sacred mystery. The term is derived from the Latin word sacramentum, which was used to translate the Greek word for mystery. Views concerning both which rites are sacramental, and what it means for an act to be a sacrament, vary among Christian denominations and traditions. The most conventional functional definition of a sacrament is that it is an outward sign, instituted by Christ, that conveys an inward, spiritual grace through Christ. The two most widely accepted sacraments are Baptism and the Eucharist; however, the majority of Christians also recognize five additional sacraments: Confirmation (Chrismation in the Eastern tradition), Holy Orders (or ordination), Penance (or Confession), Anointing of the Sick, and Matrimony (see Christian views on marriage). Taken together, these are the Seven Sacraments as recognized by churches in the High Church tradition—notably Catholic, Eastern Orthodox, Oriental Orthodox, Independent Catholic, Old Catholic, many Anglicans, and some Lutherans. Most other denominations and traditions typically affirm only Baptism and Eucharist as sacraments, while some Protestant groups, such as the Quakers, reject sacramental theology. Evangelical churches adhering to the doctrine of the believers' Church mostly use the term "ordinances" to refer to baptism and communion. In addition to this, the Church of the East has two additional sacraments in place of the traditional sacraments of Matrimony and the Anointing of the Sick. These include Holy Leaven (Melka) and the sign of the cross. 
Liturgical calendar Catholics, Eastern Christians, Lutherans, Anglicans and other traditional Protestant communities frame worship around the liturgical year. The liturgical cycle divides the year into a series of seasons, each with their theological emphases, and modes of prayer, which can be signified by different ways of decorating churches, colors of paraments and vestments for clergy, scriptural readings, themes for preaching and even different traditions and practices often observed personally or in the home. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and Eastern Christians use analogous calendars based on the cycle of their respective rites. Calendars set aside holy days, such as solemnities which commemorate an event in the life of Jesus, Mary, or the saints, and periods of fasting, such as Lent and other pious events such as memoria, or lesser festivals commemorating saints. Christian groups that do not follow a liturgical tradition often retain certain celebrations, such as Christmas, Easter, and Pentecost: these are the celebrations of Christ's birth, resurrection, and the descent of the Holy Spirit upon the Church, respectively. A few denominations such as Quaker Christians make no use of a liturgical calendar. Symbols Christianity has not generally practiced aniconism, the avoidance or prohibition of devotional images, even if early Jewish Christians and some modern denominations, invoking the Decalogue's prohibition of idolatry, avoided figures in their symbols. The cross, today one of the most widely recognized symbols, was used by Christians from the earliest times. Tertullian, in his book De Corona, tells how it was already a tradition for Christians to trace the sign of the cross on their foreheads. Although the cross was known to the early Christians, the crucifix did not appear in use until the 5th century. Among the earliest Christian symbols, that of the fish or Ichthys seems to have ranked first in importance, as seen on monumental sources such as tombs from the first decades of the 2nd century. Its popularity seemingly arose from the Greek word ichthys (fish) forming an acronym for the Greek phrase Iesous Christos Theou Yios Soter (Ἰησοῦς Χριστός, Θεοῦ Υἱός, Σωτήρ), (Jesus Christ, Son of God, Savior), a concise summary of Christian faith. Other major Christian symbols include the chi-rho monogram, the dove (symbolic of the Holy Spirit), the sacrificial lamb (representing Christ's sacrifice), the vine (symbolizing the connection of the Christian with Christ) and many others. These all derive from passages of the New Testament. Baptism Baptism is the ritual act, with the use of water, by which a person is admitted to membership of the Church. Beliefs on baptism vary among denominations. Differences occur firstly on whether the act has any spiritual significance. Some, such as the Catholic and Eastern Orthodox churches, as well as Lutherans and Anglicans, hold to the doctrine of baptismal regeneration, which affirms that baptism creates or strengthens a person's faith, and is intimately linked to salvation. Others view baptism as a purely symbolic act, an external public declaration of the inward change which has taken place in the person, but not as spiritually efficacious. Secondly, there are differences of opinion on the methodology of the act. These methods are: by immersion; if immersion is total, by submersion; by affusion (pouring); and by aspersion (sprinkling). 
Those who hold the first view may also adhere to the tradition of infant baptism; the Orthodox Churches all practice infant baptism and always baptize by total immersion repeated three times in the name of the Father, the Son, and the Holy Spirit. The Catholic Church also practices infant baptism, usually by affusion, and utilizing the Trinitarian formula. Evangelical denominations adhering to the doctrine of the believers' Church, practice the believer's baptism, by immersion in water, after the new birth and a profession of faith. For newborns, there is a ceremony called child dedication. Prayer In the Gospel of Saint Matthew, Jesus taught the Lord's Prayer, which has been seen as a model for Christian prayer. The injunction for Christians to pray the Lord's prayer thrice daily was given in the Didache and came to be recited by Christians at 9 am, 12 pm, and 3 pm. In the second century Apostolic Tradition, Hippolytus instructed Christians to pray at seven fixed prayer times: "on rising, at the lighting of the evening lamp, at bedtime, at midnight" and "the third, sixth and ninth hours of the day, being hours associated with Christ's Passion." Prayer positions, including kneeling, standing, and prostrations have been used for these seven fixed prayer times since the days of the early Church. Breviaries such as the Shehimo and Agpeya are used by Oriental Orthodox Christians to pray these canonical hours while facing in the eastward direction of prayer. The Apostolic Tradition directed that the sign of the cross be used by Christians during the minor exorcism of baptism, during ablutions before praying at fixed prayer times, and in times of temptation. Intercessory prayer is prayer offered for the benefit of other people. There are many intercessory prayers recorded in the Bible, including prayers of the Apostle Peter on behalf of sick persons and by prophets of the Old Testament in favor of other people. In the Epistle of James, no distinction is made between the intercessory prayer offered by ordinary believers and the prominent Old Testament prophet Elijah. The effectiveness of prayer in Christianity derives from the power of God rather than the status of the one praying. The ancient church, in both Eastern and Western Christianity, developed a tradition of asking for the intercession of (deceased) saints, and this remains the practice of most Eastern Orthodox, Oriental Orthodox, Catholic, and some Anglican churches. Churches of the Protestant Reformation, however, rejected prayer to the saints, largely on the basis of the sole mediatorship of Christ. The reformer Huldrych Zwingli admitted that he had offered prayers to the saints until his reading of the Bible convinced him that this was idolatrous. According to the Catechism of the Catholic Church: "Prayer is the raising of one's mind and heart to God or the requesting of good things from God." The Book of Common Prayer in the Anglican tradition is a guide which provides a set order for services, containing set prayers, scripture readings, and hymns or sung Psalms. Frequently in Western Christianity, when praying, the hands are placed palms together and forward as in the feudal commendation ceremony. At other times the older orans posture may be used, with palms up and elbows in. Scriptures Christianity, like other religions, has adherents whose beliefs and biblical interpretations vary. Christianity regards the biblical canon, the Old Testament and the New Testament, as the inspired word of God. 
The traditional view of inspiration is that God worked through human authors so that what they produced was what God wished to communicate. The Greek word referring to inspiration is theopneustos, which literally means "God-breathed". Some believe that divine inspiration makes present-day Bibles inerrant. Others claim inerrancy for the Bible in its original manuscripts, although none of those are extant. Still others maintain that only a particular translation is inerrant, such as the King James Version. Another closely related view is biblical infallibility or limited inerrancy, which affirms that the Bible is free of error as a guide to salvation, but may include errors on matters such as history, geography, or science. The books of the Bible accepted by the Orthodox, Catholic, and Protestant churches vary somewhat, with Jews accepting only the Hebrew Bible as canonical; however, there is substantial overlap. These variations are a reflection of the range of traditions, and of the councils that have ruled on the question of the canon. 
bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user. The system software serves the application, which in turn serves the user. Application software applies the power of a particular computing platform or system software to a particular purpose. Some apps such as Microsoft Office are available in versions for several different platforms; others have narrower requirements and are thus called, for example, a Geography application for Windows or an Android application for education or Linux gaming. Sometimes a new and popular application arises that only runs on one platform, increasing the desirability of that platform. This is called a killer application. Computer network A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information. Where at least one process in one device is able to send/receive data to/from at least one process residing in a remote device, then the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols include Ethernet, a hardware and link layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer, and application-specific data transmission formats. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines. Internet The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email. Computer programming Computer programming in general is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language often more restrictive or demanding than natural languages, but easily translated by the computer. The purpose of programming is to invoke the desired behavior (customization) from the machine. The process of writing high quality source code requires knowledge of both the application's domain and the computer science domain. 
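As a concrete illustration of the host-to-host data transfer described above for the Internet Protocol Suite, the following minimal sketch uses TCP sockets from Python's standard library. The loopback address, port number and message are arbitrary choices made for the example, not part of any particular application or standard.

```python
# Minimal sketch: host-to-host data transfer over TCP/IP using standard-library
# sockets. A tiny server echoes one message back to a client in the same process.
import socket
import threading

READY = threading.Event()

def echo_server(host: str = "127.0.0.1", port: int = 50007) -> None:
    """Accept a single TCP connection and echo the bytes it receives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        READY.set()                      # signal that the server is listening
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)       # receive the application payload
            conn.sendall(data)           # echo it back to the peer

def main() -> None:
    server = threading.Thread(target=echo_server, daemon=True)
    server.start()
    READY.wait()                         # avoid connecting before the server listens
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))
        cli.sendall(b"hello over TCP/IP")
        print(cli.recv(1024).decode())   # prints: hello over TCP/IP
    server.join(timeout=1)

if __name__ == "__main__":
    main()
```

Running the sketch starts a tiny echo server in a background thread; the client then connects over TCP, sends a payload, and prints the bytes echoed back, the same pattern that underlies far larger networked applications.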
The highest-quality software is thus developed by a team of various domain experts, each person a specialist in some area of development. But the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. And a single programmer could do most or all of the computer programming needed to generate the proof of concept to launch a new "killer" application. Computer programmer A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming. Computer industry The computer industry is made up of all of the businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, the manufacture of computer components and the provision of information technology services including system administration and maintenance. Software industry The software industry includes businesses engaged in development, maintenance and publication of software. The industry also includes software services, such as training, documentation, and consulting. Sub-disciplines of computing Computer engineering Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on the design of hardware within its own domain, but as well the interactions between hardware and the world around it. Software engineering Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. In layman's terms, it is the act of using insights to conceive, model and scale a solution to a problem. The first reference to the term is the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the perceived "software crisis" at the time. Software development, a much used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of Software Engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard ISO/IEC TR 19759:2015. 
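The systematic, disciplined approach that software engineering advocates can be sketched at the smallest scale as source code accompanied by tests that document and check its intended behavior. The loan-payment function below is a hypothetical example chosen only to illustrate the write-and-test cycle; it is not drawn from any particular codebase or standard.

```python
# Minimal sketch of the write-test cycle: a small piece of source code together
# with unit tests that state and check its intended behavior.
import unittest

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly payment for a simple amortized loan."""
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12                       # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyPaymentTest(unittest.TestCase):
    def test_zero_interest_splits_principal_evenly(self):
        self.assertAlmostEqual(monthly_payment(1200, 0.0, 12), 100.0)

    def test_positive_interest_costs_more_than_even_split(self):
        self.assertGreater(monthly_payment(1200, 0.06, 12), 100.0)

if __name__ == "__main__":
    unittest.main()
```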
Computer science Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems. Its subfields can be divided into practical techniques for its implementation and application in computer systems and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming itself investigates various aspects of the use of programming languages and complex systems, and human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans. Cybersecurity Data science Information systems "Information systems (IS)" is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data. According to the ACM's Computing Careers website, the study bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their designs, their applications, and their impact on society. 
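The kind of property studied by computational complexity theory, mentioned above under computer science, can be made concrete by comparing two algorithms for the same problem, membership in a sorted list. The sketch below counts comparisons for a linear scan and for binary search; the data set and the counting are illustrative choices, not a formal analysis.

```python
# Minimal sketch: two algorithms for the same computational problem (membership
# in a sorted list) with very different growth in the number of comparisons.
from typing import List

def linear_search(items: List[int], target: int) -> int:
    """Return the number of comparisons used scanning left to right (O(n))."""
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search(items: List[int], target: int) -> int:
    """Return the number of comparisons used halving the sorted list (O(log n))."""
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

if __name__ == "__main__":
    data = list(range(1_000_000))          # a sorted input of one million keys
    print(linear_search(data, 999_999))    # 1,000,000 comparisons
    print(binary_search(data, 999_999))    # about 20 comparisons
```

On a sorted list of one million integers, the linear scan may need on the order of a million comparisons while binary search needs about twenty, which is the practical meaning of linear versus logarithmic growth.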
visitors wanting to gamble, although there are currently only two casinos (both foreign owned), in Singapore. The Marina Bay Sands is the most expensive standalone casino in the world, at a price of US$8 billion, and is among the world's ten most expensive buildings. The Resorts World Sentosa has the world's largest oceanarium. Russia There are 4 legal gaming zones in Russia: "Siberian Coin" (Altay), "Yantarnaya" (Kaliningrad region), "Azov-city" (Rostov region) and "Primorie" (Primorie region). United States With currently over 1,000 casinos, the United States has the largest number of casinos in the world. The number continues to grow steadily as more states seek to legalize casinos. 40 states now have some form of casino gambling. Interstate competition, such as gaining tourism, has been a driving factor to continuous legalization. Relatively small places such as Las Vegas are best known for gambling; larger cities such as Chicago are not defined by their casinos in spite of the large turnover. The Las Vegas Valley has the largest concentration of casinos in the United States. Based on revenue, Atlantic City, New Jersey ranks second, and the Chicago region third. Top American casino markets by revenue (2015 annual revenues): Las Vegas Strip $6.348 billion Atlantic City $2.426 billion Chicago region $2.002 billion New York City $1.400 billion Detroit $1.376 billion Baltimore–Washington Metropolitan Area $1.306 billion Philadelphia $1.192 billion Mississippi Gulf Coast $1.135 billion St. Louis $1.007 billion The Poconos $965.56 million Lake Charles, Louisiana $907.51 million Boulder Strip $784.35 million Kansas City $782.05 million Shreveport $732.51 million The Nevada Gaming Control Board divides Clark County, which is coextensive with the Las Vegas metropolitan area, into seven market regions for reporting purposes. Native American gaming has been responsible for a rise in the number of casinos outside of Las Vegas and Atlantic City. Security Given the large amounts of currency handled within a casino, both patrons and staff may be tempted to cheat and steal, in collusion or independently; most casinos have security measures to prevent this. Security cameras located throughout the casino are the most basic measure. Modern casino security is usually divided between a physical security force and a specialized surveillance department. The physical security force usually patrols the casino and responds to calls for assistance and reports of suspicious or definite criminal activity. A specialized surveillance department operates the casino's closed circuit television system, known in the industry as the eye in the sky. Both of these specialized casino security departments work very closely with each other to ensure the safety of both guests and the casino's assets, and have been quite successful in preventing crime. Some casinos also have catwalks in the ceiling above the casino floor, which allow surveillance personnel to look directly down, through one way glass, on the activities at the tables and slot machines. When it opened in 1989, The Mirage was the first casino to use cameras full-time on all table games. In addition to cameras and other technological measures, casinos also enforce security through rules of conduct and behavior; for example, players at card games are required to keep the cards they are holding in their hands visible at all times. 
Business practices Over the past few decades, casinos have developed many different marketing techniques for attracting and maintaining loyal patrons. Many casinos use a loyalty rewards program to track players' spending habits and target their patrons more effectively, sending mailings with free slot play and other promotions. Casino Helsinki in Helsinki, Finland, for example, donates all of its profits to charity. Crime Casinos have been linked to organised crime, with early casinos in Las Vegas originally dominated by the American Mafia and in Macau by Triad syndicates. According to some police reports, local incidence of reported crime often doubles or triples within three years of a casino's opening. In a 2004 report by the US Department of Justice, researchers interviewed people who had been arrested in Las Vegas and Des Moines and found that the percentage of problem or pathological gamblers among the arrestees was three to five times higher than in the general population. The following lists major casino markets in the world with casino revenue of over US$1 billion, as published in PricewaterhouseCoopers's report on the outlook for the global casino market: By region By markets By company According to Bloomberg, accumulated revenue of the biggest casino operator companies worldwide amounted to almost US$55 billion in 2011. SJM Holdings Ltd. was the leading company in this field, earning $9.7 bn in 2011, followed by Las Vegas Sands Corp. at $7.4 bn. The third-biggest casino operator company (based on revenue) was Caesars Entertainment, with revenue of US$6.2 bn. Significant sites While there are casinos in many places, a few places have become well known specifically for gambling. Perhaps the place almost defined by its casino is Monte Carlo, but other places are known as gambling centers. Monte Carlo, Monaco Monte Carlo Casino, located in Monte Carlo city, in Monaco, is a casino and a tourist attraction. Monte Carlo Casino has been depicted in many books, including Ben Mezrich's Busting Vegas, where a group of Massachusetts Institute of Technology students beat the casino out of nearly $1 million. This book is based on real people and events; however, many of those events are contested by main character Semyon Dukach. Monte Carlo Casino has also been featured in multiple James Bond novels and films. The casino is mentioned in the song "The Man Who Broke the Bank at Monte Carlo" as well as the film of the same name. Campione d'Italia Casinò di Campione is located in the tiny Italian enclave of Campione d'Italia, within Ticino, Switzerland. The casino was founded in 1917 as a site to gather information from foreign diplomats during the First World War. Today it is owned by the Italian government and operated by the municipality. Because its gambling laws are less strict than those of Italy and Switzerland, it is among the most popular gambling destinations besides Monte Carlo. The income from the casino is sufficient for the operation of Campione without the imposition of taxes or the need for other revenue. In 2007, the casino moved into new, larger premises, making it the largest casino in Europe. The new casino was built alongside the old one, which dated from 1933 and has since been demolished. 
Malta The archipelago of Malta is a particularly famous place for casinos, in particular the historic casino at the princely residence of Dragonara. Macau The former Portuguese colony of Macau, a special administrative region of the People's Republic of China since 1999, is a popular destination for visitors who wish to gamble. This started in Portuguese times, when Macau was popular with visitors from nearby Hong Kong, where gambling was more closely regulated. The Venetian Macao is currently the largest casino in the world. Macau also surpassed Las Vegas as the largest gambling market in the world. Germany Machine-based gaming is only permitted in land-based casinos, restaurants, bars and gaming halls, and only subject to a licence. Online slots are, at the moment, only permitted if they are operated under a Schleswig-Holstein licence. AWPs are governed by federal law – the Trade Regulation Act and the Gaming Ordinance. Portugal The Casino Estoril, located in the municipality of Cascais, on the Portuguese Riviera, near Lisbon, is the largest casino in Europe by capacity. During the Second World War, it was reputed to be a gathering point for spies, dispossessed royals, and wartime adventurers; it became an inspiration for Ian Fleming's James Bond 007 novel Casino Royale. 
about the 14th to 18th centuries, is referred to as Middle Khmer and saw borrowings from Thai in the literary register. Modern Khmer is dated from the 19th century to today. The following table shows the conventionally accepted historical stages of Khmer. Just as modern Khmer was emerging from the transitional period represented by Middle Khmer, Cambodia fell under the influence of French colonialism. Thailand, which had for centuries claimed suzerainty over Cambodia and controlled succession to the Cambodian throne, began losing its influence on the language. In 1887 Cambodia was fully integrated into French Indochina, which brought in a French-speaking aristocracy. This led to French becoming the language of higher education and the intellectual class. By 1907, the French had wrested over half of modern-day Cambodia, including the north and northwest where Thai had been the prestige language, back from Thai control and reintegrated it into the country. Many native scholars in the early 20th century, led by a monk named Chuon Nath, resisted the French and Thai influences on their language. Forming the government sponsored Cultural Committee to define and standardize the modern language, they championed Khmerization, purging of foreign elements, reviving affixation, and the use of Old Khmer roots and historical Pali and Sanskrit to coin new words for modern ideas. Opponents, led by Keng Vannsak, who embraced "total Khmerization" by denouncing the reversion to classical languages and favoring the use of contemporary colloquial Khmer for neologisms, and Ieu Koeus, who favored borrowing from Thai, were also influential. Koeus later joined the Cultural Committee and supported Nath. Nath's views and prolific work won out and he is credited with cultivating modern Khmer-language identity and culture, overseeing the translation of the entire Pali Buddhist canon into Khmer. He also created the modern Khmer language dictionary that is still in use today, helping preserve Khmer during the French colonial period. Phonology The phonological system described here is the inventory of sounds of the standard spoken language, represented using the International Phonetic Alphabet (IPA). Consonants The voiceless plosives may occur with or without aspiration (as vs. , etc.); this difference is contrastive before a vowel. However, the aspirated sounds in that position may be analyzed as sequences of two phonemes: . This analysis is supported by the fact that infixes can be inserted between the stop and the aspiration; for example ('big') becomes ('size') with a nominalizing infix. When one of these plosives occurs initially before another consonant, aspiration is no longer contrastive and can be regarded as mere phonetic detail: slight aspiration is expected when the following consonant is not one of (or if the initial plosive is ). The voiced plosives are pronounced as implosives by most speakers, but this feature is weak in educated speech, where they become . In syllable-final position, and approach and respectively. The stops are unaspirated and have no audible release when occurring as syllable finals. In addition, the consonants , , and occur occasionally in recent loan words in the speech of Cambodians familiar with French and other languages. Vowels Various authors have proposed slightly different analyses of the Khmer vowel system. This may be in part because of the wide degree of variation in pronunciation between individual speakers, even within a dialectal region. 
The description below follows Huffman (1970). The number of vowel nuclei and their values vary between dialects; differences exist even between the Standard Khmer system and that of the Battambang dialect on which the standard is based. In addition, some diphthongs and triphthongs are analyzed as a vowel nucleus plus a semivowel ( or ) coda because they cannot be followed by a final consonant. These include: (with short monophthongs) , , , , ; (with long monophthongs) , ; (with long diphthongs) , , , , and . Syllable structure A Khmer syllable begins with a single consonant, or else with a cluster of two, or rarely three, consonants. The only possible clusters of three consonants at the start of a syllable are , and (with aspirated consonants analyzed as two-consonant sequences) . There are 85 possible two-consonant clusters (including [pʰ] etc. analyzed as /ph/ etc.). All the clusters are shown in the following table, phonetically, i.e. superscript ʰ can mark either contrastive or non-contrastive aspiration (see above). Slight vowel epenthesis occurs in the clusters consisting of a plosive followed by , in those beginning , and in the cluster . After the initial consonant or consonant cluster comes the syllabic nucleus, which is one of the vowels listed above. This vowel may end the syllable or may be followed by a coda, which is a single consonant. If the syllable is stressed and the vowel is short, there must be a final consonant. All consonant sounds except and the aspirates can appear as the coda (although final /r/ is heard in some dialects, most notably in Northern Khmer). A minor syllable (unstressed syllable preceding the main syllable of a word) has a structure of CV-, CrV-, CVN- or CrVN- (where C is a consonant, V a vowel, and N a nasal consonant). The vowels in such syllables are usually short; in conversation they may be reduced to , although in careful or formal speech, including on television and radio, they are clearly articulated. An example of such a word is mɔnuh, mɔnɨh, mĕəʾnuh ('person'), pronounced , or more casually . Stress Stress in Khmer falls on the final syllable of a word. Because of this predictable pattern, stress is non-phonemic in Khmer (it does not distinguish different meanings). Most Khmer words consist of either one or two syllables. In most native disyllabic words, the first syllable is a minor (fully unstressed) syllable. Such words have been described as sesquisyllabic (i.e. as having one-and-a-half syllables). There are also some disyllabic words in which the first syllable does not behave as a minor syllable, but takes secondary stress. Most such words are compounds, but some are single morphemes (generally loanwords). An example is ('language'), pronounced . Words with three or more syllables, if they are not compounds, are mostly loanwords, usually derived from Pali, Sanskrit, or more recently, French. They are nonetheless adapted to Khmer stress patterns. Primary stress falls on the final syllable, with secondary stress on every second syllable from the end. Thus in a three-syllable word, the first syllable has secondary stress; in a four-syllable word, the second syllable has secondary stress; in a five-syllable word, the first and third syllables have secondary stress, and so on. Long polysyllables are not often used in conversation. Compounds, however, preserve the stress patterns of the constituent words. 
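The stress rule just described, with primary stress on the final syllable and secondary stress on every second syllable counting back from the end, is regular enough to be stated as a short procedure. The sketch below is purely illustrative: the function is not a linguistic analysis tool, and the example syllable list stands in for a hypothetical four-syllable loanword.

```python
# Illustrative sketch only: assign stress marks to the syllables of a
# non-compound Khmer word following the rule described above (primary stress on
# the final syllable, secondary stress on every second syllable from the end).
from typing import List

def stress_pattern(syllables: List[str]) -> List[str]:
    """Mark each syllable as 'primary', 'secondary', or 'unstressed'."""
    n = len(syllables)
    marks = []
    for i in range(n):
        from_end = n - 1 - i              # 0 for the final syllable
        if from_end == 0:
            marks.append("primary")
        elif from_end % 2 == 0:
            marks.append("secondary")     # every second syllable from the end
        else:
            marks.append("unstressed")
    return marks

if __name__ == "__main__":
    # A hypothetical four-syllable loanword: secondary stress falls on the
    # second syllable, matching the pattern stated in the text.
    print(stress_pattern(["sa", "ha", "ka", "rana"]))
    # -> ['unstressed', 'secondary', 'unstressed', 'primary']
```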
Thus , the name of a kind of cookie (literally 'bird's nest'), is pronounced , with secondary stress on the second rather than the first syllable, because it is composed of the words ('nest') and ('bird'). Phonation and tone Khmer once had a phonation distinction in its vowels, but this now survives only in the most archaic dialect (Western Khmer). The distinction arose historically when vowels after Old Khmer voiced consonants became breathy voiced and diphthongized; for example became . When consonant voicing was lost, the distinction was maintained by the vowel (); later the phonation disappeared as well (). These processes explain the origin of what are now called a-series and o-series consonants in the Khmer script. Although most Cambodian dialects are not tonal, the colloquial Phnom Penh dialect has developed a tonal contrast (level versus peaking tone) as a by-product of the elision of . Intonation Intonation often conveys semantic context in Khmer, as in distinguishing declarative statements, questions and exclamations. The available grammatical means of making such distinctions are not always used, or may be ambiguous; for example, the final interrogative particle can also serve as an emphasizing (or in some cases negating) particle. The intonation pattern of a typical Khmer declarative phrase is a steady rise throughout followed by an abrupt drop on the last syllable. ('I don't want it') Other intonation contours signify a different type of phrase such as the "full doubt" interrogative, similar to yes-no questions in English. Full doubt interrogatives remain fairly even in tone throughout, but rise sharply towards the end. ('do you want to go to Siem Reap?') Exclamatory phrases follow the typical steadily rising pattern, but rise sharply on the last syllable instead of falling. ('this book is expensive!') Grammar Khmer is primarily an analytic language with no inflection. Syntactic relations are mainly determined by word order. Old and Middle Khmer used particles to mark grammatical categories and many of these have survived in Modern Khmer but are used sparingly, mostly in literary or formal language. Khmer makes extensive use of auxiliary verbs, "directionals" and serial verb construction. Colloquial Khmer is a zero copula language, instead preferring predicative adjectives (and even predicative nouns) unless using a copula for emphasis or to avoid ambiguity in more complex sentences. Basic word order is subject–verb–object (SVO), although subjects are often dropped; prepositions are used rather than postpositions. Topic-Comment constructions are common and the language is generally head-initial (modifiers follow the words they modify). Some grammatical processes are still not fully understood by western scholars. For example, it is not clear if certain features of Khmer grammar, such as actor nominalization, should be treated as a morphological process or a purely syntactic device, and some derivational morphology seems "purely decorative" and performs no known syntactic work. Lexical categories have been hard to define in Khmer. Henri Maspero, an early scholar of Khmer, claimed the language had no parts of speech, while a later scholar, Judith Jacob, posited four parts of speech and innumerable particles. 
John Haiman, on the other hand, identifies "a couple dozen" parts of speech in Khmer with the caveat that Khmer words have the freedom to perform a variety of syntactic functions depending on such factors as word order, relevant particles, location within a clause, intonation and context. Some of the more important lexical categories and their function are demonstrated in the following example sentence taken from a hospital brochure: Morphology Modern Khmer is an isolating language, which means that it uses little productive morphology. There is some derivation by means of prefixes and infixes, but this is a remnant of Old Khmer and not always productive in the modern language. Khmer morphology is evidence of a historical process through which the language was, at some point in the past, changed from being an agglutinative language to adopting an isolating typology. Affixed forms are lexicalized and cannot be used productively to form new words. Below are some of the most common affixes with examples as given by Huffman. Compounding in Khmer is a common derivational process that takes two forms, coordinate compounds and repetitive compounds. Coordinate compounds join two unbound morphemes (independent words) of similar meaning to form a compound signifying a concept more general than either word alone. Coordinate compounds join either two nouns or two verbs. Repetitive compounds, one of the most productive derivational features of Khmer, use reduplication of an entire word to derive words whose meaning depends on the class of the reduplicated word. A repetitive compound of a noun indicates plurality or generality, while that of an adjectival verb could mean either an intensification or plurality. Coordinate compounds: "father" + "mother" ⇒ "parents"; "to transport" + "to bring" ⇒ "to lead". Repetitive compounds: "fast" ⇒ "very fast, quickly"; "women" ⇒ "women, women in general". Nouns and pronouns Khmer nouns do not inflect for grammatical gender or singular/plural. There are no articles, but indefiniteness is often expressed by the word for "one" ( ) following the noun as in ( "a dog"). Plurality can be marked by postnominal particles, numerals, or reduplication of a following adjective, which, although similar to intensification, is usually not ambiguous due to context. Classifying particles are used after numerals, but are not always obligatory as they are in Thai or Chinese, for example, and are often dropped in colloquial speech. Khmer nouns are divided into two groups: mass nouns, which take classifiers, and specific nouns, which do not. The overwhelming majority are mass nouns. Possession is colloquially expressed by word order. The possessor is placed after the thing that is possessed. Alternatively, in more complex sentences or when emphasis is required, a possessive construction using the word (, "property, object") may be employed.
In formal and literary contexts, the possessive particle () is used. Pronouns are subject to a complicated system of social register, the choice of pronoun depending on the perceived relationships between speaker, audience and referent (see Social registers below). Khmer exhibits pronoun avoidance, so kinship terms, nicknames and proper names are often used instead of pronouns (including for the first person) among intimates. Subject pronouns are frequently dropped in colloquial conversation. Adjectives, verbs and
verb phrases may be made into nouns by the use of nominalization particles. Three of the more common particles used to create nouns are , , and . These particles are prefixed most often to verbs to form abstract nouns. The latter, derived from Sanskrit, also occurs as a suffix in fixed forms borrowed from Sanskrit and Pali such as ("health") from ("to be healthy"). Adjectives and adverbs Adjectives, demonstratives and numerals follow the noun they modify. Adverbs likewise follow the verb. Morphologically, adjectives and adverbs are not distinguished, with many words often serving either function. Adjectives are also employed as verbs as Khmer sentences rarely use a copula. Degrees of comparison are constructed syntactically. Comparatives are expressed using the word : "A X [B]" (A is more X [than B]). The most common way to express superlatives is with : "A X " (A is the most X). Intensity is also expressed syntactically, similar to other languages of the region, by reduplication or with the use of intensifiers. Verbs As is typical of most East Asian languages, Khmer verbs do not inflect at all; tense,
CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard architecture processors. Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with. Transistor CPUs The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable and fragile switching elements like vacuum tubes and relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components. In 1964, IBM introduced its IBM System/360 computer architecture, which was used in a series of computers capable of running the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to the increased reliability and dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were easily obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data (SIMD) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc. and Fujitsu Ltd. Small-scale integration CPUs During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip".
At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based on these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs but was eventually implemented with LSI components once these became practical. Large-scale integration CPUs Lee Boysel published influential articles, including a 1967 "manifesto", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI). The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor (MOS) semiconductor manufacturing process (either PMOS logic, NMOS logic, or CMOS logic). However, some companies continued to build processors out of bipolar transistor–transistor logic (TTL) chips because bipolar junction transistors were faster than MOS chips up until the 1970s (a few companies such as Datapoint continued to build processors out of TTL chips until the early 1980s). In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power. Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s. As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits. Microprocessors Since the introduction of the first commercially available microprocessor, the Intel 4004 in 1971, and the first widely used microprocessor, the Intel 8080 in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs (denoted cores) can be combined in a single processing chip. Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. 
Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore's law, which had proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity until 2016. While the complexity, size, construction and general form of CPUs have changed enormously since 1950, the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model. Operation The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle. After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline. Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called "jumps" and facilitate program behavior like loops, conditional program execution (through the use of a conditional jump), and existence of functions. In some processors, some other instructions change the state of bits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a "compare" instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow. 
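The instruction cycle and the role of flag bits can be made concrete with a small interpreter. The sketch below is only illustrative: the three-register machine, the opcode names and the (opcode, operand, operand) instruction format are invented for this example and are not taken from any real instruction set.

```python
# Minimal fetch-decode-execute sketch with a flags register and a conditional
# jump. The machine model (three registers, named opcodes, tuple-encoded
# instructions) is hypothetical and exists only to illustrate the cycle.

LOAD_IMM, ADD, CMP, JUMP_IF_LT, HALT = range(5)   # invented opcodes

def run(program):
    regs = [0, 0, 0]                 # general-purpose registers R0-R2
    flags = {"lt": False, "eq": False}
    pc = 0                           # program counter
    while True:
        opcode, a, b = program[pc]   # fetch the instruction the PC points at
        pc += 1                      # advance the PC to the next instruction
        if opcode == LOAD_IMM:       # a = destination register, b = immediate
            regs[a] = b
        elif opcode == ADD:          # register-to-register addition
            regs[a] = regs[a] + regs[b]
        elif opcode == CMP:          # compare: sets flags, produces no data
            flags["lt"] = regs[a] < regs[b]
            flags["eq"] = regs[a] == regs[b]
        elif opcode == JUMP_IF_LT:   # conditional jump consults the flags
            if flags["lt"]:
                pc = a               # a = target address; b is unused
        elif opcode == HALT:
            return regs

program = [
    (LOAD_IMM, 0, 0),    # R0 = 0   (counter)
    (LOAD_IMM, 1, 1),    # R1 = 1   (increment)
    (LOAD_IMM, 2, 5),    # R2 = 5   (loop bound)
    (ADD, 0, 1),         # R0 = R0 + R1
    (CMP, 0, 2),         # set flags from R0 compared with R2
    (JUMP_IF_LT, 3, 0),  # while R0 < R2, jump back to the ADD
    (HALT, 0, 0),
]
print(run(program))      # -> [5, 1, 5]
```

Real hardware packs the opcode and operand fields into the bits of an instruction word and performs the dispatch in decoder circuitry rather than in software, but the control flow is the same loop: fetch at the program counter, advance the counter, decode, execute, repeat.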
Fetch The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The instruction's location (address) in program memory is determined by the program counter (PC; called the "instruction pointer" in Intel x86 microprocessors), which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below). Decode The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU. The way in which the instruction is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of bits (that is, a "field") within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value (called an immediate value), or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode. In some CPU designs the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions. Execute After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher capacity main memory. For example, if an addition instruction is to be executed, registers containing operands (numbers to be summed) are activated, as are the parts of the arithmetic logic unit (ALU) that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled (and disabled) to move the output (the sum of the operation) to storage (e.g., a register or memory). If the resulting sum is too large (i.e., it is larger than the ALU's output word size), an arithmetic overflow flag will be set, influencing the next operation. Structure and implementation Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an instruction set. 
Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of bits, known as the machine language opcode. While processing an instruction, the CPU decodes the opcode (via a binary decoder) into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation (for example, the numbers to be summed in the case of an addition operation). Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes. The actual mathematical operation for each instruction is performed by a combinational logic circuit within the CPU's processor known as the arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Beside the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's floating-point unit (FPU). Control unit The control unit (CU) is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit and input and output devices how to respond to the instructions that have been sent to the processor. It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction. Arithmetic logic unit The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated on (called operands), status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from internal CPU registers or external memory, or they may be constants generated by the ALU itself. When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose. Address generation unit Address generation unit (AGU), sometimes also called address computation unit (ACU), is an execution unit inside the CPU that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements. 
While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address requires more than one general-purpose machine instruction, and such instructions do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle. The capabilities of an AGU depend on the particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Furthermore, some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, bringing further performance improvements by capitalizing on the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel. Memory management unit (MMU) Many microprocessors (in smartphones and desktop, laptop, server computers) have a memory management unit, which translates logical addresses into physical RAM addresses and provides the memory protection and paging capabilities needed for virtual memory. Simpler processors, especially microcontrollers, usually do not include an MMU. Cache A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of several cache levels (L1, L2, L3, L4, etc.). All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Each core of a multi-core processor typically has a dedicated L2 cache, which is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally on dynamic random-access memory (DRAM), rather than on static random-access memory (SRAM), on a separate die or chip. Historically, this was also the case for the L1 cache, although bigger chips have since allowed the L1 cache, and generally all cache levels, to be integrated on the same die, with the possible exception of the last level. Each extra level of cache tends to be bigger and to be optimized differently.
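One practical consequence of this hierarchy is that the order in which a program touches memory affects how often accesses are served from cache. The sketch below is a rough, machine-dependent micro-benchmark (the array size, and the claim that the sequential loop is usually faster, are assumptions that hold on typical systems with hierarchical caches; Python's interpreter overhead blunts the effect compared with compiled code): it sums the same flat buffer in sequential order and then with a large stride.

```python
# Rough illustration of cache locality: summing a matrix stored row-major,
# first row by row (sequential accesses) and then column by column (strided
# accesses). Timings vary by machine; the sequential loop is typically faster.
import array
import time

N = 2000
data = array.array("d", range(N * N))   # one flat row-major buffer (~32 MB)

def sum_rows():
    total = 0.0
    for i in range(N):
        base = i * N
        for j in range(N):
            total += data[base + j]     # consecutive elements: cache-friendly
    return total

def sum_cols():
    total = 0.0
    for j in range(N):
        for i in range(N):
            total += data[i * N + j]    # stride of N elements between accesses
    return total

for fn in (sum_rows, sum_cols):
    start = time.perf_counter()
    fn()
    print(fn.__name__, f"{time.perf_counter() - start:.2f} s")
```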
Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB) that is part of the memory management unit (MMU) that most CPUs have. Caches are generally sized in powers of two: 2, 8, 16 etc. KiB or MiB (for larger non-L1) sizes, although the IBM z13 has a 96 KiB L1 instruction cache. Clock rate Most CPUs are synchronous circuits, which means they employ a clock signal to pace their sequential operations. The clock signal is produced by an external oscillator circuit that generates a consistent number of pulses each second in the form of a periodic square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second. To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate (move) through the CPU. In setting the clock period to
a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below). However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions. One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable recent CPU design that uses extensive clock gating is the IBM PowerPC-based Xenon used in the Xbox 360; that way, power requirements of the Xbox 360 are greatly reduced. Clockless CPUs Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs.
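The clock-period constraint and the link between clock rate and power described above can be put in rough numbers. The figures below and the first-order dynamic-power model P ≈ α·C·V²·f are illustrative assumptions only, not measurements of any particular CPU.

```python
# Back-of-the-envelope sketch of the clock-period and dynamic-power relations
# described above. All numbers here are assumed for illustration.

worst_case_delay = 250e-12                  # assumed worst-case propagation delay: 250 ps
max_clock_hz = 1.0 / worst_case_delay       # the clock period cannot be shorter than this
print(f"maximum clock ~ {max_clock_hz / 1e9:.1f} GHz")   # -> 4.0 GHz

def dynamic_power(alpha, capacitance, voltage, frequency):
    """First-order CMOS dynamic-power model: P = alpha * C * V^2 * f."""
    return alpha * capacitance * voltage ** 2 * frequency

# At a fixed voltage and activity factor, doubling the clock doubles dynamic
# power; clock gating effectively lowers alpha for the blocks it disables.
for f in (2e9, 4e9):
    watts = dynamic_power(alpha=0.2, capacitance=1e-9, voltage=1.1, frequency=f)
    print(f"{f / 1e9:.0f} GHz -> {watts:.2f} W")
```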
While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers. Voltage regulator module Many modern CPUs have a die-integrated power-managing module which regulates the on-demand voltage supply to the CPU circuitry, allowing it to keep a balance between performance and power consumption. Integer range Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal (base 10) numeral system values, and others have employed more unusual representations such as ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage. Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits (significant digits of a binary-encoded integer) that the CPU can process in one operation, which is commonly called word size, bit width, data path width, integer precision, or integer size. A CPU's integer size determines the range of integer values it can directly operate on. For example, an 8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 (2^8) discrete integer values. Integer range can also affect the number of memory locations the CPU can directly address (an address is an integer value representing a specific memory location). For example, if a binary CPU uses 32 bits to represent a memory address then it can directly address 2^32 memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms (such as bank switching) that allow additional memory to be addressed. CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power (and therefore generate more heat). As a result, smaller 4- or 8-bit microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes (such as 16, 32, 64, even 128-bit) are available. When higher performance is required, however, the benefits of a larger word size (larger data ranges and address spaces) may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost.
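The ranges quoted above can be checked with a few lines of C. The snippet below shows the 256-value wrap-around of an 8-bit integer and the 2^32 locations reachable with a 32-bit address; the variable names are illustrative only.

#include <stdio.h>
#include <stdint.h>

/* Illustrates the integer ranges mentioned in the text: an 8-bit value
 * covers 2^8 = 256 states and wraps past 255, while a 32-bit address can
 * name 2^32 distinct byte locations (4 GiB). */
int main(void) {
    uint8_t small = 255;
    small = (uint8_t)(small + 1);            /* wraps to 0: only 256 values  */
    printf("8-bit value after 255 + 1: %u\n", (unsigned)small);

    uint64_t locations = 1ULL << 32;         /* 2^32 addressable locations   */
    printf("32-bit address space: %llu locations (%llu GiB)\n",
           (unsigned long long)locations,
           (unsigned long long)(locations >> 30));
    return 0;
}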
For example, even though the IBM System/360 instruction set was a 32-bit instruction set, the System/360 Model 30 and Model 40 had 8-bit data paths in the arithmetic logical unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands, and, even though the Motorola 68000 series instruction set was a 32-bit instruction set, the Motorola 68000 and Motorola 68010 had 16-bit data paths in the arithmetic logical unit, so that a 32-bit add required two cycles. To gain some of the advantages afforded by both lower and higher bit lengths, many instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM System/360 instruction set was primarily 32 bit, but supported 64-bit floating-point values to facilitate greater accuracy and range in floating-point numbers. The System/360 Model 65 had an 8-bit adder for decimal and fixed-point binary arithmetic and a 60-bit adder for floating-point arithmetic. Many later CPU designs use a similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating-point capability is required. Parallelism The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time, that is, less than one instruction per clock cycle. This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction |
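To make the narrow-datapath example concrete, here is a C sketch of a 32-bit addition performed as four 8-bit adds with carry propagation, the same idea behind the multi-cycle adds on the System/360 Model 30 and Model 40 described above. This is an illustrative software model under assumed conventions, not the machines' actual microcode.

#include <stdio.h>
#include <stdint.h>

/* Sketch of a 32-bit add carried out on an 8-bit ALU: four passes, one byte
 * at a time, with the carry from each pass fed into the next -- the reason
 * such machines needed several cycles per 32-bit add. */
static uint32_t add32_via_8bit(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    unsigned carry = 0;
    for (int byte = 0; byte < 4; byte++) {
        unsigned sum = ((a >> (8 * byte)) & 0xFF)
                     + ((b >> (8 * byte)) & 0xFF)
                     + carry;                        /* one 8-bit ALU pass   */
        result |= (uint32_t)(sum & 0xFF) << (8 * byte);
        carry = sum >> 8;                            /* carry into next byte */
    }
    return result;
}

int main(void) {
    uint32_t a = 0x12345678, b = 0x9ABCDEF0;
    printf("%#x + %#x = %#x (expected %#x)\n",
           (unsigned)a, (unsigned)b,
           (unsigned)add32_via_8bit(a, b), (unsigned)(a + b));
    return 0;
}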
(not to be confused with the modern concept of Ferae which also includes pangolins) in the tenth edition of his book Systema Naturae. He recognized six genera: Canis (canids and hyaenids), Phoca (pinnipeds), Felis (felids), Viverra (viverrids, herpestids, and mephitids), Mustela (non-badger mustelids), Ursus (ursids, large species of mustelids, and procyonids). It wasn't until 1821 that the English writer and traveler Thomas Edward Bowdich gave the group its modern and accepted name. Initially the modern concept of Carnivora was divided into two suborders: the terrestrial Fissipedia and the marine Pinnipedia. Below is the classification of how the extant families are related to each other, following American paleontologist George Gaylord Simpson in 1945: Order Carnivora Bowdich, 1821 Suborder Fissipedia Blumenbach, 1791 Superfamily Canoidea G. Fischer de Waldheim, 1817 Family Canidae G. Fischer de Waldheim, 1817 – dogs Family Ursidae G. Fischer de Waldheim, 1817 – bears Family Procyonidae Bonaparte, 1850 – raccoons and pandas Family Mustelidae G. Fischer de Waldheim, 1817 – skunks, badgers, otters and weasels Superfamily Feloidea G. Fischer de Waldheim, 1817 Family Viverridae J. E. Gray, 1821 – civets and mongooses Family Hyaenidae J. E. Gray, 1821 – hyenas Family Felidae G. Fischer de Waldheim, 1817 – cats Suborder Pinnipedia Illiger, 1811 Family Otariidae J. E. Gray, 1825 – eared seals Family Odobenidae J. A. Allen, 1880 – walrus Family Phocidae J. E. Gray, 1821 – earless seals Since then, however, the methods mammalogists use to assess the phylogenetic relationships among the carnivoran families have been improved through a more sophisticated and intensive incorporation of genetics, morphology and the fossil record. Research into Carnivora phylogeny since 1945 has found Fissipedia to be paraphyletic with respect to Pinnipedia, with pinnipeds being either more closely related to bears or to weasels. The small carnivoran families Viverridae, Procyonidae, and Mustelidae have been found to be polyphyletic: mongooses and a handful of Malagasy endemic species are found to be in a clade with hyenas, with the Malagasy species being in their own family Eupleridae. The African palm civet is a basal cat-like carnivoran. The linsang is more closely related to cats. Pandas are not procyonids, nor are they a natural grouping; the giant panda is a true bear while the red panda belongs to a distinct family. Skunks and stink badgers are placed in their own family, and are the sister group to a clade containing Ailuridae, Procyonidae and Mustelidae sensu stricto. Below is a table of the extant carnivoran families and the number of extant species recognized by various authors of the first and fourth volumes of Handbook of the Mammals of the World, published in 2009 and 2014 respectively: Anatomy and physiology Craniodental region The canine teeth are usually large and conical. The canines are thick and incredibly stress-resistant. All of the terrestrial species of carnivorans have three incisors in the top and bottom rows of the dentition (the exception being the sea otter (Enhydra lutris), which has only two lower incisor teeth). The third molar has been lost. The carnassial pair is made up of the fourth upper premolar and the first lower molar teeth. As in most mammals, the dentition is heterodont, though in some species, like the aardwolf (Proteles cristata), the teeth have been greatly reduced and the cheek teeth are specialised for eating insects.
In pinnipeds the teeth are homodont, as they have evolved to grasp or to catch fish, and the cheek teeth are often lost. In bears and raccoons the carnassial pair is secondarily reduced. The skulls are heavily built, with a strong zygomatic arch. A sagittal crest is often present, sometimes more evident in sexually dimorphic species like sea lions and fur seals, though it has also been greatly reduced in some small carnivorans. The braincase is enlarged, and the frontoparietal bones are positioned at the front of it. In most species the eyes are positioned at the front of the face. In caniforms the rostrum is usually longer, with many teeth, whereas in feliforms the rostrum is shorter and has fewer teeth. The carnassial teeth in feliforms, however, are more sectorial. The turbinates are large and complex in comparison to those of other mammals, providing a large surface area for olfactory receptors. Postcranial region Aside from an accumulation of characteristics in the dental and cranial features, not much of their overall anatomy unites carnivorans as a group. All species of carnivorans have quadrupedal limbs, usually with five digits on the front feet and four digits on the back feet. In terrestrial carnivorans the feet have soft pads. The feet can either be digitigrade, as seen in cats, hyenas and dogs, or | North America, with various lineages being successful in megafaunal faunivorous niches at different intervals during the Miocene and later epochs. Systematics Evolution The order Carnivora belongs to a group of mammals known as Laurasiatheria, which also includes other groups such as bats and ungulates. Within this group the carnivorans are placed in the clade Ferae. Ferae includes the closest extant relatives of carnivorans, the pangolins, as well as several extinct groups of mostly Paleogene carnivorous placentals such as the creodonts, the arctocyonians, and the mesonychians. The creodonts were originally thought of as the sister taxon to the carnivorans, perhaps even ancestral to them, based on the presence of carnassial teeth, but the nature of the carnassial teeth is different between the two groups: in carnivorans the carnassials are positioned near the front of the molar row, while in the creodonts they are positioned near the back of the molar row, and this suggests a separate evolutionary history and an order-level distinction. In addition, recent phylogenetic analysis suggests that creodonts are more closely related to pangolins, while mesonychians might be the sister group to carnivorans and their stem-relatives. The closest stem-carnivorans are the miacoids. The miacoids include the families Viverravidae and Miacidae, and together the Carnivora and Miacoidea form the stem-clade Carnivoramorpha. The miacoids were small, genet-like carnivoramorphs that occupied a variety of niches, such as terrestrial and arboreal habitats. Recent studies have provided a supporting body of evidence that Miacoidea is an evolutionary grade of carnivoramorphs in which, while the viverravids form a monophyletic basal group, the miacids are paraphyletic with respect to Carnivora (as shown in the phylogeny below). Carnivoramorpha as a whole first appeared in the Paleocene of North America about 60 million years ago. Crown carnivorans first appeared around 42 million years ago in the Middle Eocene. Their molecular phylogeny shows that the extant Carnivora are a monophyletic group, the crown group of the Carnivoramorpha.
From there, carnivorans split into two clades based on the composition of the bony structures that surround the middle ear of the skull: the cat-like feliforms and the dog-like caniforms. In feliforms, the auditory bullae are double-chambered, composed of two bones joined by a septum. Caniforms have single-chambered or partially divided auditory bullae, composed of a single bone. Initially, the early representatives of carnivorans were small, as the creodonts (specifically, the oxyaenids) and the mesonychians dominated the apex predator niches during the Eocene, but in the Oligocene carnivorans became a dominant group of apex predators with the nimravids, and by the Miocene most of the extant carnivoran families had diversified and become the primary terrestrial predators in the Northern Hemisphere. The phylogenetic relationships of the carnivorans are shown in the following cladogram: Classification of the extant carnivorans In 1758 the Swedish botanist Carl Linnaeus placed all carnivorans known at the time into the group Ferae (not to be confused with the modern concept of Ferae, which also includes pangolins) in the tenth edition of his book Systema Naturae. He recognized six genera: Canis (canids and hyaenids), Phoca (pinnipeds), Felis (felids), Viverra (viverrids, herpestids, and mephitids), Mustela (non-badger mustelids), Ursus (ursids, large species of mustelids, and procyonids). It wasn't until 1821 that the English writer and traveler Thomas Edward Bowdich gave the group its modern and accepted name. Initially the modern concept of Carnivora was divided into two suborders: the terrestrial Fissipedia and the marine Pinnipedia. Below is the classification of how the extant families are related to each other, following American paleontologist George Gaylord Simpson in 1945: Order Carnivora Bowdich, 1821 Suborder Fissipedia Blumenbach, 1791 Superfamily Canoidea G. Fischer de Waldheim, 1817 Family Canidae G. Fischer de Waldheim, 1817 – dogs Family Ursidae G. Fischer de Waldheim, 1817 – bears Family Procyonidae Bonaparte, 1850 – raccoons and pandas Family Mustelidae G. Fischer de Waldheim, 1817 – skunks, badgers, otters and weasels Superfamily Feloidea G. Fischer de Waldheim, 1817 Family Viverridae J. E. Gray, 1821 – civets and mongooses Family Hyaenidae J. E. Gray, 1821 – hyenas Family Felidae G. Fischer de Waldheim, 1817 – cats Suborder Pinnipedia Illiger, 1811 Family Otariidae J. E. Gray, 1825 – eared seals Family Odobenidae J. A. Allen, 1880 – walrus Family Phocidae J. E. Gray, 1821 – earless seals Since then, however, the methods mammalogists use to assess the phylogenetic relationships among the carnivoran families have been improved through a more sophisticated and intensive incorporation of genetics, morphology and the fossil record. Research into Carnivora phylogeny since 1945 has found Fissipedia to be paraphyletic with respect to Pinnipedia, with pinnipeds being either more closely related to bears or to weasels. The small carnivoran families Viverridae, Procyonidae, and Mustelidae have been found to be polyphyletic: mongooses and a handful of Malagasy endemic species are found to be in a clade with hyenas, with the Malagasy species being in their own family Eupleridae. The African palm civet is a basal cat-like carnivoran. The linsang is more closely related to cats. Pandas are not procyonids, nor are they a natural grouping; the giant panda is a true bear while the red panda belongs to a distinct family.
Skunks and stink badgers are placed in their own family, and are the sister group to a clade containing Ailuridae, Procyonidae and Mustelidae sensu stricto. Below is a table of the extant carnivoran families and the number of extant species recognized by various authors of the first and fourth volumes of Handbook of the Mammals of the World, published in 2009 and 2014 respectively: Anatomy and physiology Craniodental region The canine teeth are usually large and conical. The canines are thick and incredibly stress-resistant. All of the terrestrial species of carnivorans have three incisors in the top and bottom rows of the dentition (the exception being the sea otter (Enhydra lutris), which has only two lower incisor teeth). The third molar has been lost. The carnassial pair is made up of the fourth upper premolar and the first lower molar teeth. As in most mammals, the dentition is heterodont, though in some species, like the aardwolf (Proteles cristata), the teeth have been greatly reduced and the cheek teeth are specialised for eating insects. In pinnipeds the teeth are homodont, as they have evolved to grasp or to catch fish, and the cheek teeth are often lost. In bears |
to those two cities in several economic and logistical ways. Great Britain declared war on Spain in 1739, and the city of Cartagena quickly became a top target for the British. A massive British expeditionary force was dispatched to capture the city, but after initial inroads devastating outbreaks of disease crippled their numbers and the British were forced to withdraw. The battle became one of Spain's most decisive victories in the conflict, and secured Spanish dominance in the Caribbean until the Seven Years' War. The 18th-century priest, botanist and mathematician José Celestino Mutis was delegated by Viceroy Antonio Caballero y Góngora to conduct an inventory of the nature of New Granada. Started in 1783, this became known as the Royal Botanical Expedition to New Granada. It classified plants and wildlife, and founded the first astronomical observatory in the city of Santa Fe de Bogotá. In July 1801 the Prussian scientist Alexander von Humboldt reached Santa Fe de Bogotá where he met with Mutis. In addition, historical figures in the process of independence in New Granada emerged from the expedition as the astronomer Francisco José de Caldas, the scientist Francisco Antonio Zea, the zoologist Jorge Tadeo Lozano and the painter Salvador Rizo. Independence Since the beginning of the periods of conquest and colonization, there were several rebel movements against Spanish rule, but most were either crushed or remained too weak to change the overall situation. The last one that sought outright independence from Spain sprang up around 1810 and culminated in the Colombian Declaration of Independence, issued on 20 July 1810, the day that is now celebrated as the nation's Independence Day. This movement followed the independence of St. Domingue (present-day Haiti) in 1804, which provided some support to an eventual leader of this rebellion: Simón Bolívar. Francisco de Paula Santander also would play a decisive role. A movement was initiated by Antonio Nariño, who opposed Spanish centralism and led the opposition against the Viceroyalty. Cartagena became independent in November 1811. In 1811, the United Provinces of New Granada were proclaimed, headed by Camilo Torres Tenorio. The emergence of two distinct ideological currents among the patriots (federalism and centralism) gave rise to a period of instability. Shortly after the Napoleonic Wars ended, Ferdinand VII, recently restored to the throne in Spain, unexpectedly decided to send military forces to retake most of northern South America. The viceroyalty was restored under the command of Juan Sámano, whose regime punished those who participated in the patriotic movements, ignoring the political nuances of the juntas. The retribution stoked renewed rebellion, which, combined with a weakened Spain, made possible a successful rebellion led by the Venezuelan-born Simón Bolívar, who finally proclaimed independence in 1819. The pro-Spanish resistance was defeated in 1822 in the present territory of Colombia and in 1823 in Venezuela. The territory of the Viceroyalty of New Granada became the Republic of Colombia, organized as a union of the current territories of Colombia, Panama, Ecuador, Venezuela, parts of Guyana and Brazil and north of Marañón River. The Congress of Cúcuta in 1821 adopted a constitution for the new Republic. Simón Bolívar became the first President of Colombia, and Francisco de Paula Santander was made Vice President. However, the new republic was unstable and the Gran Colombia ultimately collapsed. 
Modern Colombia comes from one of the countries that emerged after the dissolution of la Gran Colombia, the other two being Ecuador and Venezuela. Colombia was the first constitutional government in South America, and the Liberal and Conservative parties, founded in 1848 and 1849, respectively, are two of the oldest surviving political parties in the Americas. Slavery was abolished in the country in 1851. Internal political and territorial divisions led to the dissolution of Gran Colombia in 1830. The so-called "Department of Cundinamarca" adopted the name "New Granada", which it kept until 1858 when it became the "Confederación Granadina" (Granadine Confederation). After a two-year civil war in 1863, the "United States of Colombia" was created, lasting until 1886, when the country finally became known as the Republic of Colombia. Internal divisions remained between the bipartisan political forces, occasionally igniting very bloody civil wars, the most significant being the Thousand Days' War (1899–1902). 20th century The United States of America's intentions to influence the area (especially the Panama Canal construction and control) led to the separation of the Department of Panama in 1903 and the establishment of it as a nation. The United States paid Colombia $25,000,000 in 1921, seven years after completion of the canal, for redress of President Roosevelt's role in the creation of Panama, and Colombia recognized Panama under the terms of the Thomson–Urrutia Treaty. Colombia and Peru went to war because of territory disputes far in the Amazon basin. The war ended with a peace deal brokered by the League of Nations. The League finally awarded the disputed area to Colombia in June 1934. Soon after, Colombia achieved some degree of political stability, which was interrupted by a bloody conflict that took place between the late 1940s and the early 1950s, a period known as La Violencia ("The Violence"). Its cause was mainly mounting tensions between the two leading political parties, which subsequently ignited after the assassination of the Liberal presidential candidate Jorge Eliécer Gaitán on 9 April 1948. The ensuing riots in Bogotá, known as El Bogotazo, spread throughout the country and claimed the lives of at least 180,000 Colombians. Colombia entered the Korean War when Laureano Gómez was elected president. It was the only Latin American country to join the war in a direct military role as an ally of the United States. Particularly important was the resistance of the Colombian troops at Old Baldy. The violence between the two political parties decreased first when Gustavo Rojas deposed the President of Colombia in a coup d'état and negotiated with the guerrillas, and then under the military junta of General Gabriel París. After Rojas' deposition, the Colombian Conservative Party and Colombian Liberal Party agreed to create the National Front, a coalition that would jointly govern the country. Under the deal, the presidency would alternate between conservatives and liberals every 4 years for 16 years; the two parties would have parity in all other elective offices. The National Front ended "La Violencia", and National Front administrations attempted to institute far-reaching social and economic reforms in cooperation with the Alliance for Progress. Despite the progress in certain sectors, many social and political problems continued, and guerrilla groups were formally created such as the FARC, the ELN and the M-19 to fight the government and political apparatus. 
Since the 1960s, the country has suffered from an asymmetric low-intensity armed conflict between government forces, leftist guerrilla groups and right wing paramilitaries. The conflict escalated in the 1990s, mainly in remote rural areas. Since the beginning of the armed conflict, human rights defenders have fought for the respect for human rights, despite staggering opposition. Several guerrillas' organizations decided to demobilize after peace negotiations in 1989–1994. The United States has been heavily involved in the conflict since its beginnings, when in the early 1960s the U.S. government encouraged the Colombian military to attack leftist militias in rural Colombia. This was part of the U.S. fight against communism. Mercenaries and multinational corporations such as Chiquita Brands International are some of the international actors that have contributed to the violence of the conflict. Beginning in the mid-1970s Colombian drug cartels became major producers, processors and exporters of illegal drugs, primarily marijuana and cocaine. On 4 July 1991, a new Constitution was promulgated. The changes generated by the new constitution are viewed as positive by Colombian society. 21st century The administration of President Álvaro Uribe (2002–10), adopted the democratic security policy which included an integrated counter-terrorism and counter-insurgency campaign. The Government economic plan also promoted confidence in investors. As part of a controversial peace process the AUC (right-wing paramilitaries) as a formal organization had ceased to function. In February 2008, millions of Colombians demonstrated against FARC and other outlawed groups. After peace negotiations in Cuba, the Colombian government of President Juan Manuel Santos and the guerrillas of the FARC-EP announced a final agreement to end the conflict. However, a referendum to ratify the deal was unsuccessful. Afterward, the Colombian government and the FARC signed a revised peace deal in November 2016, which the Colombian congress approved. In 2016, President Santos was awarded the Nobel Peace Prize. The Government began a process of attention and comprehensive reparation for victims of conflict. Colombia shows modest progress in the struggle to defend human rights, as expressed by HRW. A Special Jurisdiction of Peace has been created to investigate, clarify, prosecute and punish serious human rights violations and grave breaches of international humanitarian law which occurred during the armed conflict and to satisfy victims' right to justice. During his visit to Colombia, Pope Francis paid tribute to the victims of the conflict. In June 2018, Ivan Duque, the candidate of the right-wing Democratic Center party, won the presidential election. On 7 August 2018, he was sworn in as the new President of Colombia to succeed Juan Manuel Santos. Colombia's relations with Venezuela have fluctuated due to ideological differences between the two governments. Colombia has offered humanitarian support with food and medicines to mitigate the shortage of supplies in Venezuela. Colombia's Foreign Ministry said that all efforts to resolve Venezuela's crisis should be peaceful. Colombia proposed the idea of the Sustainable Development Goals and a final document was adopted by the United Nations. In February 2019, Venezuelan president Nicolás Maduro cut off diplomatic relations with Colombia after Colombian President Ivan Duque had helped Venezuelan opposition politicians deliver humanitarian aid to their country. 
Colombia recognized Venezuelan opposition leader Juan Guaidó as the country's legitimate president. In January 2020, Colombia rejected Maduro's proposal that the two countries would restore diplomatic relations. Protests started on 28 April 2021 when the government proposed a tax bill which would greatly expand the range of the 19 percent value-added tax. Geography The geography of Colombia is characterized by its six main natural regions that present their own unique characteristics, from the Andes mountain range region shared with Ecuador and Venezuela; the Pacific Coastal region shared with Panama and Ecuador; the Caribbean coastal region shared with Venezuela and Panama; the Llanos (plains) shared with Venezuela; the Amazon Rainforest region shared with Venezuela, Brazil, Peru and Ecuador; to the insular area, comprising islands in both the Atlantic and Pacific oceans. It shares its maritime limits with Costa Rica, Nicaragua, Honduras, Jamaica, Haiti, and the Dominican Republic. Colombia is bordered to the northwest by Panama, to the east by Venezuela and Brazil, and to the south by Ecuador and Peru; it established its maritime boundaries with neighboring countries through seven agreements on the Caribbean Sea and three on the Pacific Ocean. It lies between latitudes 12°N and 4°S and between longitudes 67° and 79°W. Part of the Ring of Fire, a region of the world subject to earthquakes and volcanic eruptions, in the interior of Colombia the Andes are the prevailing geographical feature. Most of Colombia's population centers are located in these interior highlands. Beyond the Colombian Massif (in the southwestern departments of Cauca and Nariño), these are divided into three branches known as cordilleras (mountain ranges): the Cordillera Occidental, running adjacent to the Pacific coast and including the city of Cali; the Cordillera Central, running between the Cauca and Magdalena River valleys (to the west and east, respectively) and including the cities of Medellín, Manizales, Pereira, and Armenia; and the Cordillera Oriental, extending northeast to the Guajira Peninsula and including Bogotá, Bucaramanga, and Cúcuta. Peaks in the Cordillera Occidental exceed , and in the Cordillera Central and Cordillera Oriental they reach . At , Bogotá is the highest city of its size in the world. East of the Andes lies the savanna of the Llanos, part of the Orinoco River basin, and in the far southeast, the jungle of the Amazon rainforest. Together these lowlands make up over half Colombia's territory, but they contain less than 6% of the population. To the north the Caribbean coast, home to 21.9% of the population and the location of the major port cities of Barranquilla and Cartagena, generally consists of low-lying plains, but it also contains the Sierra Nevada de Santa Marta mountain range, which includes the country's tallest peaks (Pico Cristóbal Colón and Pico Simón Bolívar), and the La Guajira Desert. By contrast the narrow and discontinuous Pacific coastal lowlands, backed by the Serranía de Baudó mountains, are sparsely populated and covered in dense vegetation. The principal Pacific port is Buenaventura. The main rivers of Colombia are Magdalena, Cauca, Guaviare, Atrato, Meta, Putumayo and Caquetá. Colombia has four main drainage systems: the Pacific drain, the Caribbean drain, the Orinoco Basin and the Amazon Basin. The Orinoco and Amazon Rivers mark limits with Colombia to Venezuela and Peru respectively. 
Protected areas and the "National Park System" cover an area of about and account for 12.77% of the Colombian territory. Compared to neighboring countries, rates of deforestation in Colombia are still relatively low. Colombia had a 2018 Forest Landscape Integrity Index mean score of 8.26/10, ranking it 25th globally out of 172 countries. Colombia is the sixth country in the world by magnitude of total renewable freshwater supply, and still has large reserves of freshwater. Climate The climate of Colombia is characterized for being tropical presenting variations within six natural regions and depending on the altitude, temperature, humidity, winds and rainfall. Colombia has a diverse range of climate zones, including tropical rainforests, savannas, steppes, deserts and mountain climates. Mountain climate is one of the unique features of the Andes and other high altitude reliefs where climate is determined by elevation. Below in elevation is the warm altitudinal zone, where temperatures are above . About 82.5% of the country's total area lies in the warm altitudinal zone. The temperate climate altitudinal zone located between is characterized for presenting an average temperature ranging between . The cold climate is present between and the temperatures vary between . Beyond lies the alpine conditions of the forested zone and then the treeless grasslands of the páramos. Above , where temperatures are below freezing, the climate is glacial, a zone of permanent snow and ice. Biodiversity Colombia is one of the megadiverse countries in biodiversity, ranking first in bird species. As for plants, the country has between 40,000 and 45,000 plant species, equivalent to 10 or 20% of total global species, which is even more remarkable given that Colombia is considered a country of intermediate size. Colombia is the second most biodiverse country in the world, lagging only after Brazil which is approximately 7 times bigger. Colombia is the country with the planet's highest biodiversity, having the highest rate of species by area as well as the largest number of endemisms (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth live in Colombia, including over 1,900 species of bird, more than in Europe and North America combined. Colombia has 10% of the world's mammals species, 14% of the amphibian species and 18% of the bird species of the world. Colombia has about 2,000 species of marine fish and is the second most diverse country in freshwater fish. It is also the country with the most endemic species of butterflies, is first in orchid species, and has approximately 7,000 species of beetles. Colombia is second in the number of amphibian species and is the third most diverse country in reptiles and palms. There are about 1,900 species of mollusks and according to estimates there are about 300,000 species of invertebrates in the country. In Colombia there are 32 terrestrial biomes and 314 types of ecosystems. Government and politics The government of Colombia takes place within the framework of a presidential participatory democratic republic as established in the Constitution of 1991. In accordance with the principle of separation of powers, government is divided into three branches: the executive branch, the legislative branch and the judicial branch. As the head of the executive branch, the President of Colombia serves as both head of state and head of government, followed by the Vice President and the Council of Ministers. 
The president is elected by popular vote to serve a single four-year term (In 2015, Colombia's Congress approved the repeal of a 2004 constitutional amendment that changed the one-term limit for presidents to a two-term limit). At the provincial level executive power is vested in department governors, municipal mayors and local administrators for smaller administrative subdivisions, such as corregimientos or comunas. All regional elections are held one year and five months after the presidential election. The legislative branch of government is represented nationally by the Congress, a bicameral institution comprising a 166-seat Chamber of Representatives and a 102-seat Senate. The Senate is elected nationally and the Chamber of Representatives is elected in electoral districts. Members of both houses are elected to serve four-year terms two months before the president, also by popular vote. The judicial branch is headed by four high courts, consisting of the Supreme Court which deals with penal and civil matters, the Council of State, which has special responsibility for administrative law and also provides legal advice to the executive, the Constitutional Court, responsible for assuring the integrity of the Colombian constitution, and the Superior Council of Judicature, responsible for auditing the judicial branch. Colombia operates a system of civil law, which since 2005 has been applied through an adversarial system. Despite a number of controversies, the democratic security policy has ensured that former President Uribe remained popular among Colombian people, with his approval rating peaking at 76%, according to a poll in 2009. However, having served two terms, he was constitutionally barred from seeking re-election in 2010. In the run-off elections on 20 June 2010 the former Minister of defense Juan Manuel Santos won with 69% of the vote against the second most popular candidate, Antanas Mockus. A second round was required since no candidate received over the 50% winning threshold of votes. Santos won nearly 51% of the vote in second-round elections on 15 June 2014, beating right-wing rival Óscar Iván Zuluaga, who won 45%. Iván Duque won in the second round with 54% of the vote, against 42% for his left-wing rival, Gustavo Petro. His term as Colombia's president runs for four years beginning 7 August 2018. Foreign affairs The foreign affairs of Colombia are headed by the President, as head of state, and managed by the Minister of Foreign Affairs. Colombia has diplomatic missions in all continents. Colombia was one of the 4 founding members of the Pacific Alliance, which is a political, economic and co-operative integration mechanism that promotes the free circulation of goods, services, capital and persons between the members, as well as a common stock exchange and joint embassies in several countries. Colombia is also a member of the United Nations, the World Trade Organization, the Organisation for Economic Co-operation and Development, the Organization of American States, the Organization of Ibero-American States, and the Andean Community of Nations. Colombia is a global partner of NATO. Military The executive branch of government is responsible for managing the defense of Colombia, with the President commander-in-chief of the armed forces. The Ministry of Defence exercises day-to-day control of the military and the Colombian National Police. Colombia has 455,461 active military personnel. 
In 2016, 3.4% of the country's GDP went towards military expenditure, placing it 24th in the world. Colombia's armed forces are the largest in Latin America, and it is the second largest spender on its military after Brazil. In 2018, Colombia signed the UN treaty on the Prohibition of Nuclear Weapons. The Colombian military is | in 1823 in Venezuela. The territory of the Viceroyalty of New Granada became the Republic of Colombia, organized as a union of the current territories of Colombia, Panama, Ecuador, Venezuela, parts of Guyana and Brazil and north of Marañón River. The Congress of Cúcuta in 1821 adopted a constitution for the new Republic. Simón Bolívar became the first President of Colombia, and Francisco de Paula Santander was made Vice President. However, the new republic was unstable and the Gran Colombia ultimately collapsed. Modern Colombia comes from one of the countries that emerged after the dissolution of la Gran Colombia, the other two being Ecuador and Venezuela. Colombia was the first constitutional government in South America, and the Liberal and Conservative parties, founded in 1848 and 1849, respectively, are two of the oldest surviving political parties in the Americas. Slavery was abolished in the country in 1851. Internal political and territorial divisions led to the dissolution of Gran Colombia in 1830. The so-called "Department of Cundinamarca" adopted the name "New Granada", which it kept until 1858 when it became the "Confederación Granadina" (Granadine Confederation). After a two-year civil war in 1863, the "United States of Colombia" was created, lasting until 1886, when the country finally became known as the Republic of Colombia. Internal divisions remained between the bipartisan political forces, occasionally igniting very bloody civil wars, the most significant being the Thousand Days' War (1899–1902). 20th century The United States of America's intentions to influence the area (especially the Panama Canal construction and control) led to the separation of the Department of Panama in 1903 and the establishment of it as a nation. The United States paid Colombia $25,000,000 in 1921, seven years after completion of the canal, for redress of President Roosevelt's role in the creation of Panama, and Colombia recognized Panama under the terms of the Thomson–Urrutia Treaty. Colombia and Peru went to war because of territory disputes far in the Amazon basin. The war ended with a peace deal brokered by the League of Nations. The League finally awarded the disputed area to Colombia in June 1934. Soon after, Colombia achieved some degree of political stability, which was interrupted by a bloody conflict that took place between the late 1940s and the early 1950s, a period known as La Violencia ("The Violence"). Its cause was mainly mounting tensions between the two leading political parties, which subsequently ignited after the assassination of the Liberal presidential candidate Jorge Eliécer Gaitán on 9 April 1948. The ensuing riots in Bogotá, known as El Bogotazo, spread throughout the country and claimed the lives of at least 180,000 Colombians. Colombia entered the Korean War when Laureano Gómez was elected president. It was the only Latin American country to join the war in a direct military role as an ally of the United States. Particularly important was the resistance of the Colombian troops at Old Baldy. 
The violence between the two political parties decreased first when Gustavo Rojas deposed the President of Colombia in a coup d'état and negotiated with the guerrillas, and then under the military junta of General Gabriel París. After Rojas' deposition, the Colombian Conservative Party and Colombian Liberal Party agreed to create the National Front, a coalition that would jointly govern the country. Under the deal, the presidency would alternate between conservatives and liberals every 4 years for 16 years; the two parties would have parity in all other elective offices. The National Front ended "La Violencia", and National Front administrations attempted to institute far-reaching social and economic reforms in cooperation with the Alliance for Progress. Despite the progress in certain sectors, many social and political problems continued, and guerrilla groups were formally created such as the FARC, the ELN and the M-19 to fight the government and political apparatus. Since the 1960s, the country has suffered from an asymmetric low-intensity armed conflict between government forces, leftist guerrilla groups and right wing paramilitaries. The conflict escalated in the 1990s, mainly in remote rural areas. Since the beginning of the armed conflict, human rights defenders have fought for the respect for human rights, despite staggering opposition. Several guerrillas' organizations decided to demobilize after peace negotiations in 1989–1994. The United States has been heavily involved in the conflict since its beginnings, when in the early 1960s the U.S. government encouraged the Colombian military to attack leftist militias in rural Colombia. This was part of the U.S. fight against communism. Mercenaries and multinational corporations such as Chiquita Brands International are some of the international actors that have contributed to the violence of the conflict. Beginning in the mid-1970s Colombian drug cartels became major producers, processors and exporters of illegal drugs, primarily marijuana and cocaine. On 4 July 1991, a new Constitution was promulgated. The changes generated by the new constitution are viewed as positive by Colombian society. 21st century The administration of President Álvaro Uribe (2002–10), adopted the democratic security policy which included an integrated counter-terrorism and counter-insurgency campaign. The Government economic plan also promoted confidence in investors. As part of a controversial peace process the AUC (right-wing paramilitaries) as a formal organization had ceased to function. In February 2008, millions of Colombians demonstrated against FARC and other outlawed groups. After peace negotiations in Cuba, the Colombian government of President Juan Manuel Santos and the guerrillas of the FARC-EP announced a final agreement to end the conflict. However, a referendum to ratify the deal was unsuccessful. Afterward, the Colombian government and the FARC signed a revised peace deal in November 2016, which the Colombian congress approved. In 2016, President Santos was awarded the Nobel Peace Prize. The Government began a process of attention and comprehensive reparation for victims of conflict. Colombia shows modest progress in the struggle to defend human rights, as expressed by HRW. A Special Jurisdiction of Peace has been created to investigate, clarify, prosecute and punish serious human rights violations and grave breaches of international humanitarian law which occurred during the armed conflict and to satisfy victims' right to justice. 
During his visit to Colombia, Pope Francis paid tribute to the victims of the conflict. In June 2018, Ivan Duque, the candidate of the right-wing Democratic Center party, won the presidential election. On 7 August 2018, he was sworn in as the new President of Colombia to succeed Juan Manuel Santos. Colombia's relations with Venezuela have fluctuated due to ideological differences between the two governments. Colombia has offered humanitarian support with food and medicines to mitigate the shortage of supplies in Venezuela. Colombia's Foreign Ministry said that all efforts to resolve Venezuela's crisis should be peaceful. Colombia proposed the idea of the Sustainable Development Goals and a final document was adopted by the United Nations. In February 2019, Venezuelan president Nicolás Maduro cut off diplomatic relations with Colombia after Colombian President Ivan Duque had helped Venezuelan opposition politicians deliver humanitarian aid to their country. Colombia recognized Venezuelan opposition leader Juan Guaidó as the country's legitimate president. In January 2020, Colombia rejected Maduro's proposal that the two countries would restore diplomatic relations. Protests started on 28 April 2021 when the government proposed a tax bill which would greatly expand the range of the 19 percent value-added tax. Geography The geography of Colombia is characterized by its six main natural regions that present their own unique characteristics, from the Andes mountain range region shared with Ecuador and Venezuela; the Pacific Coastal region shared with Panama and Ecuador; the Caribbean coastal region shared with Venezuela and Panama; the Llanos (plains) shared with Venezuela; the Amazon Rainforest region shared with Venezuela, Brazil, Peru and Ecuador; to the insular area, comprising islands in both the Atlantic and Pacific oceans. It shares its maritime limits with Costa Rica, Nicaragua, Honduras, Jamaica, Haiti, and the Dominican Republic. Colombia is bordered to the northwest by Panama, to the east by Venezuela and Brazil, and to the south by Ecuador and Peru; it established its maritime boundaries with neighboring countries through seven agreements on the Caribbean Sea and three on the Pacific Ocean. It lies between latitudes 12°N and 4°S and between longitudes 67° and 79°W. Part of the Ring of Fire, a region of the world subject to earthquakes and volcanic eruptions, in the interior of Colombia the Andes are the prevailing geographical feature. Most of Colombia's population centers are located in these interior highlands. Beyond the Colombian Massif (in the southwestern departments of Cauca and Nariño), these are divided into three branches known as cordilleras (mountain ranges): the Cordillera Occidental, running adjacent to the Pacific coast and including the city of Cali; the Cordillera Central, running between the Cauca and Magdalena River valleys (to the west and east, respectively) and including the cities of Medellín, Manizales, Pereira, and Armenia; and the Cordillera Oriental, extending northeast to the Guajira Peninsula and including Bogotá, Bucaramanga, and Cúcuta. Peaks in the Cordillera Occidental exceed , and in the Cordillera Central and Cordillera Oriental they reach . At , Bogotá is the highest city of its size in the world. East of the Andes lies the savanna of the Llanos, part of the Orinoco River basin, and in the far southeast, the jungle of the Amazon rainforest. 
Together these lowlands make up over half Colombia's territory, but they contain less than 6% of the population. To the north the Caribbean coast, home to 21.9% of the population and the location of the major port cities of Barranquilla and Cartagena, generally consists of low-lying plains, but it also contains the Sierra Nevada de Santa Marta mountain range, which includes the country's tallest peaks (Pico Cristóbal Colón and Pico Simón Bolívar), and the La Guajira Desert. By contrast the narrow and discontinuous Pacific coastal lowlands, backed by the Serranía de Baudó mountains, are sparsely populated and covered in dense vegetation. The principal Pacific port is Buenaventura. The main rivers of Colombia are Magdalena, Cauca, Guaviare, Atrato, Meta, Putumayo and Caquetá. Colombia has four main drainage systems: the Pacific drain, the Caribbean drain, the Orinoco Basin and the Amazon Basin. The Orinoco and Amazon Rivers mark limits with Colombia to Venezuela and Peru respectively. Protected areas and the "National Park System" cover an area of about and account for 12.77% of the Colombian territory. Compared to neighboring countries, rates of deforestation in Colombia are still relatively low. Colombia had a 2018 Forest Landscape Integrity Index mean score of 8.26/10, ranking it 25th globally out of 172 countries. Colombia is the sixth country in the world by magnitude of total renewable freshwater supply, and still has large reserves of freshwater. Climate The climate of Colombia is characterized for being tropical presenting variations within six natural regions and depending on the altitude, temperature, humidity, winds and rainfall. Colombia has a diverse range of climate zones, including tropical rainforests, savannas, steppes, deserts and mountain climates. Mountain climate is one of the unique features of the Andes and other high altitude reliefs where climate is determined by elevation. Below in elevation is the warm altitudinal zone, where temperatures are above . About 82.5% of the country's total area lies in the warm altitudinal zone. The temperate climate altitudinal zone located between is characterized for presenting an average temperature ranging between . The cold climate is present between and the temperatures vary between . Beyond lies the alpine conditions of the forested zone and then the treeless grasslands of the páramos. Above , where temperatures are below freezing, the climate is glacial, a zone of permanent snow and ice. Biodiversity Colombia is one of the megadiverse countries in biodiversity, ranking first in bird species. As for plants, the country has between 40,000 and 45,000 plant species, equivalent to 10 or 20% of total global species, which is even more remarkable given that Colombia is considered a country of intermediate size. Colombia is the second most biodiverse country in the world, lagging only after Brazil which is approximately 7 times bigger. Colombia is the country with the planet's highest biodiversity, having the highest rate of species by area as well as the largest number of endemisms (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth live in Colombia, including over 1,900 species of bird, more than in Europe and North America combined. Colombia has 10% of the world's mammals species, 14% of the amphibian species and 18% of the bird species of the world. Colombia has about 2,000 species of marine fish and is the second most diverse country in freshwater fish. 
It is also the country with the most endemic species of butterflies, is first in orchid species, and has approximately 7,000 species of beetles. Colombia is second in the number of amphibian species and is the third most diverse country in reptiles and palms. There are about 1,900 species of mollusks and according to estimates there are about 300,000 species of invertebrates in the country. In Colombia there are 32 terrestrial biomes and 314 types of ecosystems. Government and politics The government of Colombia takes place within the framework of a presidential participatory democratic republic as established in the Constitution of 1991. In accordance with the principle of separation of powers, government is divided into three branches: the executive branch, the legislative branch and the judicial branch. As the head of the executive branch, the President of Colombia serves as both head of state and head of government, followed by the Vice President and the Council of Ministers. The president is elected by popular vote to serve a single four-year term (In 2015, Colombia's Congress approved the repeal of a 2004 constitutional amendment that changed the one-term limit for presidents to a two-term limit). At the provincial level executive power is vested in department governors, municipal mayors and local administrators for smaller administrative subdivisions, such as corregimientos or comunas. All regional elections are held one year and five months after the presidential election. The legislative branch of government is represented nationally by the Congress, a bicameral institution comprising a 166-seat Chamber of Representatives and a 102-seat Senate. The Senate is elected nationally and the Chamber of Representatives is elected in electoral districts. Members of both houses are elected to serve four-year terms two months before the president, also by popular vote. The judicial branch is headed by four high courts, consisting of the Supreme Court which deals with penal and civil matters, the Council of State, which has special responsibility for administrative law and also provides legal advice to the executive, the Constitutional Court, responsible for assuring the integrity of the Colombian constitution, and the Superior Council of Judicature, responsible for auditing the judicial branch. Colombia operates a system of civil law, which since 2005 has been applied through an adversarial system. Despite a number of controversies, the democratic security policy has ensured that former President Uribe remained popular among Colombian people, with his approval rating peaking at 76%, according to a poll in 2009. However, having served two terms, he was constitutionally barred from seeking re-election in 2010. In the run-off elections on 20 June 2010 the former Minister of defense Juan Manuel Santos won with 69% of the vote against the second most popular candidate, Antanas Mockus. A second round was required since no candidate received over the 50% winning threshold of votes. Santos won nearly 51% of the vote in second-round elections on 15 June 2014, beating right-wing rival Óscar Iván Zuluaga, who won 45%. Iván Duque won in the second round with 54% of the vote, against 42% for his left-wing rival, Gustavo Petro. His term as Colombia's president runs for four years beginning 7 August 2018. Foreign affairs The foreign affairs of Colombia are headed by the President, as head of state, and managed by the Minister of Foreign Affairs. Colombia has diplomatic missions in all continents. 
Colombia was one of the four founding members of the Pacific Alliance, which is a political, economic and co-operative integration mechanism that promotes the free circulation of goods, services, capital and persons between the members, as well as a common stock exchange and joint embassies in several countries. Colombia is also a member of the United Nations, the World Trade Organization, the Organisation for Economic Co-operation and Development, the Organization of American States, the Organization of Ibero-American States, and the Andean Community of Nations. Colombia is a global partner of NATO. Military The executive branch of government is responsible for managing the defense of Colombia, with the President as commander-in-chief of the armed forces. The Ministry of Defence exercises day-to-day control of the military and the Colombian National Police. Colombia has 455,461 active military personnel. In 2016, 3.4% of the country's GDP went towards military expenditure, placing it 24th in the world. Colombia's armed forces are the largest in Latin America, and it is the second-largest spender on its military after Brazil. In 2018, Colombia signed the UN treaty on the Prohibition of Nuclear Weapons. The Colombian military is divided into three branches: the National Army of Colombia; the Colombian Air Force; and the Colombian Navy. The National Police functions as a gendarmerie, operating independently from the military as the law enforcement agency for the entire country. Each of these operates with its own intelligence apparatus, separate from the National Intelligence Directorate (DNI, in Spanish). The National Army is formed by divisions, brigades, special brigades and special units; the Colombian Navy by the Naval Infantry, the Naval Force of the Caribbean, the Naval Force of the Pacific, the Naval Force of the South, the Naval Force of the East, the Colombian Coast Guard, Naval Aviation, and the Specific Command of San Andres y Providencia; and the Air Force by 15 air units. The National Police has a presence in all municipalities. Administrative divisions Colombia is divided into 32 departments and one capital district, which is treated as a department (Bogotá also serves as the capital of the department of Cundinamarca). Departments are subdivided into municipalities, each of which is assigned a municipal seat, and municipalities are in turn subdivided into corregimientos in rural areas and into comunas in urban areas. Each department has a local government with a governor and assembly directly elected to four-year terms, and each municipality is headed by a mayor and council. There is a popularly elected local administrative board in each of the corregimientos or comunas. In addition to the capital, four other cities have been designated districts (in effect special municipalities) on the basis of special distinguishing features: Barranquilla, Cartagena, Santa Marta and Buenaventura. Some departments have local administrative subdivisions, where towns have a large concentration of population and municipalities are near each other (for example, in Antioquia and Cundinamarca). Where departments have a low population (for example Amazonas, Vaupés and Vichada), special administrative divisions are employed, such as "department corregimientos", which are a hybrid of a municipality and a corregimiento. Largest cities and towns Colombia is a highly urbanized country, with 77.1% of the population living in urban areas.
The largest cities in the country are Bogotá, with 7,387,400 inhabitants, Medellín, with 2,382,399 inhabitants, Cali, with 2,172,527 inhabitants, and Barranquilla, with 1,205,284 inhabitants. Economy Historically an agrarian economy, Colombia urbanized rapidly in the 20th century, by the end of which just 15.8% of the workforce were employed in agriculture, generating just 6.6% of GDP; 19.6% of the workforce were employed in industry and 64.6% in services, responsible for 33.4% and 59.9% of GDP respectively. The country's economic production is dominated by its strong domestic demand. Consumption expenditure by households is the largest component of GDP. Colombia's market economy grew steadily in the latter part of the 20th century, with gross domestic product (GDP) increasing at an average rate of over 4% per year between 1970 and 1998. The country suffered a recession in 1999 (the first full year of negative growth since the Great Depression), and the recovery from that recession was long and painful. However, in recent years growth has been impressive, reaching 6.9% in 2007, one of the highest rates of growth in Latin America. According to International Monetary Fund estimates, in 2012, Colombia's GDP (PPP) was US$500 billion (28th in the world and third in South America). Total government expenditures account for 27.9 percent of the domestic economy. External debt equals 39.9 percent of gross domestic product. A strong fiscal climate was reaffirmed by a boost in bond ratings. Annual inflation closed 2017 at 4.09% YoY (vs. 5.75% YoY in 2016). The average national unemployment rate in 2017 was 9.4%, although the informality is the biggest problem facing the labour market (the income of formal workers climbed 24.8% in 5 years while labor incomes of informal workers rose only 9%). Colombia has free-trade zones (FTZ), such as Zona Franca del Pacifico, located in the Valle del Cauca, one of the most striking areas for foreign investment. The financial sector has grown favorably due to good liquidity in the economy, the growth of credit and the positive performance of the Colombian economy. The Colombian Stock Exchange through the Latin American Integrated Market (MILA) offers a regional market to trade equities. Colombia is now one of only three economies with a perfect score on the strength of legal rights index, according to the World Bank. The electricity production in Colombia comes mainly from Renewable energy sources. 69.93% is obtained from the hydroelectric generation. Colombia's commitment to renewable energy was recognized in the 2014 Global Green Economy Index (GGEI), ranking among the top 10 nations in the world in terms of greening efficiency sectors. Colombia is rich in natural resources, and it is heavily dependent on energy and mining exports. Colombia's main exports include mineral fuels, oils, distillation products, fruit and other agricultural products, sugars and sugar confectionery, food products, plastics, precious stones, metals, forest products, chemical goods, pharmaceuticals, vehicles, electronic products, electrical equipment, perfumery and cosmetics, machinery, manufactured articles, textile and fabrics, clothing and footwear, glass and glassware, furniture, prefabricated buildings, military products, home and office material, construction equipment, software, among others. Principal trading partners are the United States, China, the European Union and some Latin American countries. 
Non-traditional exports have boosted the growth of Colombian foreign sales, as well as the diversification of export destinations, thanks to new free trade agreements. In 2017, the National Administrative Department of Statistics (DANE) reported that 26.9% of the population were living below the poverty line, of which 7.4% were in "extreme poverty". The multidimensional poverty rate stands at 17.0 percent of the population. The Government has also been developing a process of financial inclusion within the country's most vulnerable population. Recent economic growth has led to a considerable increase in the number of new millionaires, including new entrepreneurs with a net worth exceeding US$1 billion. The contribution of Travel & Tourism to GDP was US$5,880.3 million (2.0% of total GDP) in 2016. Tourism generated 556,135 jobs (2.5% of total employment) in 2016. Foreign tourist visits were predicted to have risen from 0.6 million in 2007 to 4 million in 2017. Science and technology Colombia has more than 3,950 research groups in science and technology. iNNpulsa, a government body that promotes entrepreneurship and innovation in the country, provides grants to startups, in addition to other services that it and other institutions offer. Colombia was ranked 68th in the Global Innovation Index in 2020, down from 67th in 2019. Co-working spaces have arisen to serve as communities for startups large and small. Organizations such as the Corporation for Biological Research (CIB), which supports young people interested in scientific work, have been successfully developed in Colombia. The International Center for Tropical Agriculture, based in Colombia, investigates the increasing challenges of global warming and food security. Important inventions related to medicine have been made in Colombia, such as the first external artificial pacemaker with internal electrodes, invented by the electronics engineer Jorge Reynolds Pombo, an invention of great importance for those who suffer from heart failure. Also invented in Colombia were the microkeratome and the keratomileusis technique, which form the fundamental basis of what is now known as LASIK (one of the most important techniques for the correction of refractive errors of vision), and the Hakim valve for the treatment of hydrocephalus. Colombia has begun to innovate in military technology for its army and other armies of the world, especially in the design and creation of personal ballistic protection products, military hardware, military robots, bombs, simulators and radar. Some leading Colombian scientists are Joseph M. Tohme, a researcher recognized for his work on the genetic diversity of food; Manuel Elkin Patarroyo, known for his groundbreaking work on synthetic vaccines for malaria; Francisco Lopera, who discovered the "Paisa Mutation", a type of early-onset Alzheimer's; Rodolfo Llinás, known for his study of the intrinsic properties of neurons and for a theory of a syndrome that changed the understanding of how the brain functions; Jairo Quiroga Puello, recognized for his studies on the characterization of synthetic substances that can be used to fight fungi, tumors, tuberculosis and even some viruses; and Ángela Restrepo, who established accurate diagnoses and treatments to combat the effects of a disease caused by Paracoccidioides brasiliensis.
Transportation Transportation in Colombia is regulated within the functions of the Ministry of Transport and entities such as the National Roads Institute (INVÍAS), responsible for the highways in Colombia; Aerocivil, responsible for civil aviation and airports; the National Infrastructure Agency, in charge of concessions through public–private partnerships for the design, construction, maintenance, operation and administration of the transport infrastructure; and the General Maritime Directorate (Dimar), responsible for coordinating maritime traffic control.
began officially making the film. Wise said that Welles "had an older editor assigned to him for those tests and evidently he was not too happy and asked to have somebody else. I was roughly Orson's age and had several good credits." Wise and Robson began editing the film while it was still shooting and said that they "could tell certainly that we were getting something very special. It was outstanding film day in and day out." Welles gave Wise detailed instructions and was usually not present during the film's editing. The film was very well planned out and intentionally shot for such post-production techniques as slow dissolves. The lack of coverage made editing easy since Welles and Toland edited the film "in camera" by leaving few options of how it could be put together. Wise said the breakfast table sequence took weeks to edit and get the correct "timing" and "rhythm" for the whip pans and overlapping dialogue. The News on the March sequence was edited by RKO's newsreel division to give it authenticity. They used stock footage from Pathé News and the General Film Library. During post-production Welles and special effects artist Linwood G. Dunn experimented with an optical printer to improve certain scenes that Welles found unsatisfactory from the footage. Whereas Welles was often immediately pleased with Wise's work, he would require Dunn and post-production audio engineer James G. Stewart to re-do their work several times until he was satisfied. Welles hired Bernard Herrmann to compose the film's score. Where most Hollywood film scores were written quickly, in as few as two or three weeks after filming was completed, Herrmann was given 12 weeks to write the music. He had sufficient time to do his own orchestrations and conducting, and worked on the film reel by reel as it was shot and cut. He wrote complete musical pieces for some of the montages, and Welles edited many of the scenes to match their length. Trailer Written and directed by Welles at Toland's suggestion, the theatrical trailer for Citizen Kane differs from other trailers in that it did not feature a single second of footage of the actual film itself, but acts as a wholly original, tongue-in-cheek, pseudo-documentary piece on the film's production. Filmed at the same time as Citizen Kane itself, it offers the only existing behind-the-scenes footage of the film. The trailer, shot by Wild instead of Toland, follows an unseen Welles as he provides narration for a tour around the film set, introductions to the film's core cast members, and a brief overview of Kane's character. The trailer also contains a number of trick shots, including one of Everett Sloane appearing at first to be running into the camera, which turns out to be the reflection of the camera in a mirror. At the time, it was almost unprecedented for a film trailer to not actually feature anything of the film itself; and while Citizen Kane is frequently cited as a groundbreaking, influential film, Simon Callow argues its trailer was no less original in its approach. Callow writes that it has "great playful charm ... it is a miniature documentary, almost an introduction to the cinema ... Teasing, charming, completely original, it is a sort of conjuring trick: Without his face appearing once on the screen, Welles entirely dominates its five [sic] minutes' duration." Style Film scholars and historians view Citizen Kane as Welles's attempt to create a new style of filmmaking by studying various forms of it and combining them into one. 
However, Welles stated that his love for cinema began only when he started working on the film. When asked where he got the confidence as a first-time director to direct a film so radically different from contemporary cinema, he responded, "Ignorance, ignorance, sheer ignorance—you know there's no confidence to equal it. It's only when you know something about a profession, I think, that you're timid or careful." David Bordwell wrote that "The best way to understand Citizen Kane is to stop worshipping it as a triumph of technique." Bordwell argues that the film did not invent any of its famous techniques such as deep focus cinematography, shots of the ceilings, chiaroscuro lighting and temporal jump-cuts, and that many of these stylistics had been used in German Expressionist films of the 1920s, such as The Cabinet of Dr. Caligari. But Bordwell asserts that the film did put them all together for the first time and perfected the medium in one single film. In a 1948 interview, D. W. Griffith said, "I loved Citizen Kane and particularly loved the ideas he took from me." Arguments against the film's cinematic innovations were made as early as 1946 when French historian Georges Sadoul wrote, "The film is an encyclopedia of old techniques." He pointed out such examples as compositions that used both the foreground and the background in the films of Auguste and Louis Lumière, special effects used in the films of Georges Méliès, shots of the ceiling in Erich von Stroheim's Greed and newsreel montages in the films of Dziga Vertov. French film critic André Bazin defended the film, writing: "In this respect, the accusation of plagiarism could very well be extended to the film's use of panchromatic film or its exploitation of the properties of gelatinous silver halide." Bazin disagreed with Sadoul's comparison to Lumière's cinematography since Citizen Kane used more sophisticated lenses, but acknowledged that it had similarities to such previous works as The 49th Parallel and The Power and the Glory. Bazin stated that "even if Welles did not invent the cinematic devices employed in Citizen Kane, one should nevertheless credit him with the invention of their meaning." Bazin championed the techniques in the film for its depiction of heightened reality, but Bordwell believed that the film's use of special effects contradicted some of Bazin's theories. Storytelling techniques Citizen Kane rejects the traditional linear, chronological narrative and tells Kane's story entirely in flashbacks using different points of view, many of them from Kane's aged and forgetful associates, the cinematic equivalent of the unreliable narrator in literature. Welles also dispenses with the idea of a single storyteller and uses multiple narrators to recount Kane's life, a technique not used previously in Hollywood films. Each narrator recounts a different part of Kane's life, with each story overlapping another. The film depicts Kane as an enigma, a complicated man who leaves viewers with more questions than answers as to his character, such as the newsreel footage where he is attacked for being both a communist and a fascist. The technique of flashbacks had been used in earlier films, notably The Power and the Glory (1933), but no film was as immersed in it as Citizen Kane. Thompson the reporter acts as a surrogate for the audience, questioning Kane's associates and piecing together his life. 
Films typically had an "omniscient perspective" at the time, which Marilyn Fabe says gives the audience the "illusion that we are looking with impunity into a world which is unaware of our gaze". Citizen Kane also begins in that fashion until the News on the March sequence, after which the audience sees the film through the perspectives of others. The News on the March sequence gives an overview of Kane's entire life (and the film's entire story) at the beginning of the film, leaving the audience without the typical suspense of wondering how it will end. Instead, the film's repetitions of events compel the audience to analyze and wonder why Kane's life happened the way that it did, under the pretext of finding out what "Rosebud" means. The film then returns to the omniscient perspective in the final scene, when only the audience discovers what "Rosebud" is. Cinematography The most innovative technical aspect of Citizen Kane is the extended use of deep focus, where the foreground, background, and everything in between are all in sharp focus. Cinematographer Toland did this through his experimentation with lenses and lighting. Toland described the achievement, made possible by the sensitivity of modern speed film, in an article for Theatre Arts magazine: New developments in the science of motion picture photography are not abundant at this advanced stage of the game but periodically one is perfected to make this a greater art. Of these I am in an excellent position to discuss what is termed "Pan-focus", as I have been active for two years in its development and used it for the first time in Citizen Kane. Through its use, it is possible to photograph action from a range of eighteen inches from the camera lens to over two hundred feet away, with extreme foreground and background figures and action both recorded in sharp relief. Hitherto, the camera had to be focused either for a close or a distant shot, all efforts to encompass both at the same time resulting in one or the other being out of focus. This handicap necessitated the breaking up of a scene into long and short angles, with much consequent loss of realism. With pan-focus, the camera, like the human eye, sees an entire panorama at once, with everything clear and lifelike. Another unorthodox method used in the film was the use of low-angle shots facing upwards, thus allowing ceilings to be shown in the background of several scenes. Every set was built with a ceiling, which broke with studio convention, and many ceilings were constructed of fabric that concealed microphones. Welles felt that the camera should show what the eye sees, and that it was a bad theatrical convention to pretend that there was no ceiling—"a big lie in order to get all those terrible lights up there," he said. He became fascinated with the look of low angles, which made even dull interiors look interesting. One extremely low angle is used to photograph the encounter between Kane and Leland after Kane loses the election. A hole was dug for the camera, which required drilling into the concrete floor. Welles credited Toland on the same title card as himself. "It's impossible to say how much I owe to Gregg," he said. "He was superb." He called Toland "the best director of photography that ever existed." Sound Citizen Kane's sound was recorded by Bailey Fesler and re-recorded in post-production by audio engineer James G. Stewart, both of whom had worked in radio.
Stewart said that Hollywood films never deviated from a basic pattern of how sound could be recorded or used, but with Welles "deviation from the pattern was possible because he demanded it." Although the film is known for its complex soundtrack, much of the audio is heard as it was recorded by Fesler and without manipulation. Welles used techniques from radio like overlapping dialogue. The scene in which characters sing "Oh, Mr. Kane" was especially complicated and required mixing several soundtracks together. He also used different "sound perspectives" to create the illusion of distances, such as in scenes at Xanadu where characters speak to each other at far distances. Welles experimented with sound in post-production, creating audio montages, and chose to create all of the sound effects for the film instead of using RKO's library of sound effects. Welles used an aural technique from radio called the "lightning-mix". Welles used this technique to link complex montage sequences via a series of related sounds or phrases. For example, Kane grows from a child into a young man in just two shots. As Thatcher hands eight-year-old Kane a sled and wishes him a Merry Christmas, the sequence suddenly jumps to a shot of Thatcher fifteen years later, completing the sentence he began in both the previous shot and the chronological past. Other radio techniques include using a number of voices, each saying a sentence or sometimes merely a fragment of a sentence, and splicing the dialogue together in quick succession, such as the projection room scene. The film's sound cost $16,996, but was originally budgeted at $7,288. Film critic and director François Truffaut wrote that "Before Kane, nobody in Hollywood knew how to set music properly in movies. Kane was the first, in fact the only, great film that uses radio techniques. ... A lot of filmmakers know enough to follow Auguste Renoir's advice to fill the eyes with images at all costs, but only Orson Welles understood that the sound track had to be filled in the same way." Cedric Belfrage of The Clipper wrote "of all of the delectable flavours that linger on the palate after seeing Kane, the use of sound is the strongest." Make-up The make-up for Citizen Kane was created and applied by Maurice Seiderman (1907–1989), a junior member of the RKO make-up department. He had not been accepted into the union, which recognized him as only an apprentice, but RKO nevertheless used him to make up principal actors. "Apprentices were not supposed to make up any principals, only extras, and an apprentice could not be on a set without a journeyman present," wrote make-up artist Dick Smith, who became friends with Seiderman in 1979. "During his years at RKO I suspect these rules were probably overlooked often." "Seiderman had gained a reputation as one of the most inventive and creatively precise up-and-coming makeup men in Hollywood," wrote biographer Frank Brady. On an early tour of RKO, Welles met Seiderman in the small make-up lab that he created for himself in an unused dressing room. "Welles fastened on to him at once," wrote biographer Charles Higham, as Seiderman had developed his own makeup methods "that ensured complete naturalness of expression—a naturalness unrivaled in Hollywood." Seiderman developed a thorough plan for aging the principal characters, first making a plaster cast of the face of each of the actors who aged. He made a plaster mold of Welles's body down to the hips. 
"My sculptural techniques for the characters' aging were handled by adding pieces of white modeling clay, which matched the plaster, onto the surface of each bust," Seiderman told Norman Gambill. When Seiderman achieved the desired effect, he cast the clay pieces in a soft plastic material that he formulated himself. These appliances were then placed onto the plaster bust and a four-piece mold was made for each phase of aging. The castings were then fully painted and paired with the appropriate wig for evaluation. Before the actors went before the cameras each day, the pliable pieces were applied directly to their faces to recreate Seiderman's sculptural image. The facial surface was underpainted in a flexible red plastic compound; The red ground resulted in a warmth of tone that was picked up by the panchromatic film. Over that was applied liquid grease paint, and finally a colorless translucent talcum. Seiderman created the effect of skin pores on Kane's face by stippling the surface with a negative cast made from an orange peel. Welles often arrived on the set at 2:30 am, as application of the sculptural make-up took 3½ hours for the oldest incarnation of Kane. The make-up included appliances to age Welles's shoulders, breast, and stomach. "In the film and production photographs, you can see that Kane had a belly that overhung," Seiderman said. "That was not a costume, it was the rubber sculpture that created the image. You could see how Kane's silk shirt clung wetly to the character's body. It could not have been done any other way." Seiderman worked with Charles Wright on the wigs. These went over a flexible skull cover that Seiderman created and sewed into place with elastic thread. When he found the wigs too full, he untied one hair at a time to alter their shape. Kane's mustache was inserted into the makeup surface a few hairs at a time, to realistically vary the color and texture. He also made scleral lenses for Welles, Dorothy Comingore, George Coulouris, and Everett Sloane to dull the brightness of their young eyes. The lenses took a long time to fit properly, and Seiderman began work on them before devising any of the other makeup. "I painted them to age in phases, ending with the blood vessels and the arcus senilis of old age." Seiderman's tour de force was the breakfast montage, shot all in one day. "Twelve years, two years shot at each scene," he said. The major studios gave screen credit for make-up only to the department head. When RKO make-up department head Mel Berns refused to share credit with Seiderman, who was only an apprentice, Welles told Berns that there would be no make-up credit. Welles signed a large advertisement in the Los Angeles newspaper: THANKS TO EVERYBODY WHO GETS SCREEN CREDIT FOR "CITIZEN KANE"AND THANKS TO THOSE WHO DON'TTO ALL THE ACTORS, THE CREW, THE OFFICE, THE MUSICIANS, EVERYBODYAND PARTICULARLY TO MAURICE SEIDERMAN, THE BEST MAKE-UP MAN IN THE WORLD Sets Although credited as an assistant, the film's art direction was done by Perry Ferguson. Welles and Ferguson got along during their collaboration. In the weeks before production began Welles, Toland and Ferguson met regularly to discuss the film and plan every shot, set design and prop. Ferguson would take notes during these discussions and create rough designs of the sets and story boards for individual shots. After Welles approved the rough sketches, Ferguson made miniature models for Welles and Toland to experiment on with a periscope in order to rehearse and perfect each shot. 
Ferguson then had detailed drawings made for the set design, including the film's lighting design. The set design was an integral part of the film's overall look and Toland's cinematography. In the original script the Great Hall at Xanadu was modeled after the Great Hall in Hearst Castle and its design included a mixture of Renaissance and Gothic styles. "The Hearstian element is brought out in the almost perverse juxtaposition of incongruous architectural styles and motifs," wrote Carringer. Before RKO cut the film's budget, Ferguson's designs were more elaborate and resembled the production designs of early Cecil B. DeMille films and Intolerance. The budget cuts reduced Ferguson's budget by 33 percent and his work cost $58,775 total, which was below average at that time. To save costs Ferguson and Welles re-wrote scenes in Xanadu's living room and transported them to the Great Hall. A large staircase from another film was found and used at no additional cost. When asked about the limited budget, Ferguson said "Very often—as in that much-discussed 'Xanadu' set in Citizen Kane—we can make a foreground piece, a background piece, and imaginative lighting suggests a great deal more on the screen than actually exists on the stage." According to the film's official budget there were 81 sets built, but Ferguson said there were between 106 and 116. Still photographs of Oheka Castle in Huntington, New York, were used in the opening montage, representing Kane's Xanadu estate. Ferguson also designed statues from Kane's collection with styles ranging from Greek to German Gothic. The sets were also built to accommodate Toland's camera movements. Walls were built to fold and furniture could quickly be moved. The film's famous ceilings were made out of muslin fabric and camera boxes were built into the floors for low angle shots. Welles later said that he was proud that the film production value looked much more expensive than the film's budget. Although neither worked with Welles again, Toland and Ferguson collaborated in several films in the 1940s. Special effects The film's special effects were supervised by RKO department head Vernon L. Walker. Welles pioneered several visual effects to cheaply shoot things like crowd scenes and large interior spaces. For example, the scene in which the camera in the opera house rises dramatically to the rafters, to show the workmen showing a lack of appreciation for Susan Alexander Kane's performance, was shot by a camera craning upwards over the performance scene, then a curtain wipe to a miniature of the upper regions of the house, and then another curtain wipe matching it again with the scene of the workmen. Other scenes effectively employed miniatures to make the film look much more expensive than it truly was, such as various shots of Xanadu. Some shots included rear screen projection in the background, such as Thompson's interview of Leland and some of the ocean backgrounds at Xanadu. Bordwell claims that the scene where Thatcher agrees to be Kane's guardian used rear screen projection to depict young Kane in the background, despite this scene being cited as a prime example of Toland's deep focus cinematography. A special effects camera crew from Walker's department was required for the extreme close-up shots such as Kane's lips when he says "Rosebud" and the shot of the typewriter typing Susan's bad review. Optical effects artist Dunn claimed that "up to 80 percent of some reels was optically printed." 
These shots were traditionally attributed to Toland for years. The optical printer improved some of the deep focus shots. One problem with the optical printer was that it sometimes created excessive graininess, such as the optical zoom out of the snow globe. Welles decided to superimpose snow falling to mask the graininess in these shots. Toland said that he disliked the results of the optical printer, but acknowledged that "RKO special effects expert Vernon Walker, ASC, and his staff handled their part of the production—a by no means inconsiderable assignment—with ability and fine understanding." Any time deep focus was impossible—as in the scene in which Kane finishes a negative review of Susan's opera while at the same time firing the person who began writing the review—an optical printer was used to make the whole screen appear in focus, visually layering one piece of film onto another. However, some apparently deep-focus shots were the result of in-camera effects, as in the famous scene in which Kane breaks into Susan's room after her suicide attempt. In the background, Kane and another man break into the room, while simultaneously the medicine bottle and a glass with a spoon in it are in closeup in the foreground. The shot was an in-camera matte shot. The foreground was shot first, with the background dark. Then the background was lit, the foreground darkened, the film rewound, and the scene re-shot with the background action. Music The film's music was composed by Bernard Herrmann. Herrmann had composed for Welles for his Mercury Theatre radio broadcasts. Because it was Herrmann's first motion picture score, RKO wanted to pay him only a small fee, but Welles insisted he be paid at the same rate as Max Steiner. The score established Herrmann as an important new composer of film soundtracks and eschewed the typical Hollywood practice of scoring a film with virtually non-stop music. Instead Herrmann used what he later described as "radio scoring", musical cues typically 5–15 seconds in length that bridge the action or suggest a different emotional response. The breakfast montage sequence begins with a graceful waltz theme and gets darker with each variation on that theme as the passage of time leads to the hardening of Kane's personality and the breakdown of his first marriage. Herrmann realized that musicians slated to play his music were hired for individual unique sessions; there was no need to write for existing ensembles. This meant that he was free to score for unusual combinations of instruments, even instruments that are not commonly heard. In the opening sequence, for example, the tour of Kane's estate Xanadu, Herrmann introduces a recurring leitmotif played by low woodwinds, including a quartet of alto flutes. For Susan Alexander Kane's operatic sequence, Welles suggested that Herrmann compose a witty parody of a Mary Garden vehicle, an aria from Salammbô. "Our problem was to create something that would give the audience the feeling of the quicksand into which this simple little girl, having a charming but small voice, is suddenly thrown," Herrmann said. Writing in the style of a 19th-century French Oriental opera, Herrmann put the aria in a key that would force the singer to strain to reach the high notes, culminating in a high D, well outside the range of Susan Alexander. Soprano Jean Forward dubbed the vocal part for Comingore. 
Houseman claimed to have written the libretto, based on Jean Racine's Athalie and Phedre, although some confusion remains since Lucille Fletcher remembered preparing the lyrics. Fletcher, then Herrmann's wife, wrote the libretto for his opera Wuthering Heights. Music enthusiasts consider the scene in which Susan Alexander Kane attempts to sing the famous cavatina "Una voce poco fa" from Il barbiere di Siviglia by Gioachino Rossini with vocal coach Signor Matiste as especially memorable for depicting the horrors of learning music through mistakes. In 1972, Herrmann said, "I was fortunate to start my career with a film like Citizen Kane, it's been a downhill run ever since!" Welles loved Herrmann's score and told director Henry Jaglom that it was 50 percent responsible for the film's artistic success. Some incidental music came from other sources. Welles heard the tune used for the publisher's theme, "Oh, Mr. Kane", in Mexico. Called "A Poco No", the song was written by Pepe Guízar and special lyrics were written by Herman Ruby. "In a Mizz", a 1939 jazz song by Charlie Barnet and Haven Johnson, bookends Thompson's second interview of Susan Alexander Kane. "I kind of based the whole scene around that song," Welles said. "The music is by Nat Cole—it's his trio." Later—beginning with the lyrics, "It can't be love"—"In a Mizz" is performed at the Everglades picnic, framing the fight in the tent between Susan and Kane. Musicians including bandleader Cee Pee Johnson (drums), Alton Redd (vocals), Raymond Tate (trumpet), Buddy Collette (alto sax) and Buddy Banks (tenor sax) are featured. All of the music used in the newsreel came from the RKO music library, edited at Welles's request by the newsreel department to achieve what Herrmann called "their own crazy way of cutting". The News on the March theme that accompanies the newsreel titles is "Belgian March" by Anthony Collins, from the film Nurse Edith Cavell. Other examples are an excerpt from Alfred Newman's score for Gunga Din (the exploration of Xanadu), Roy Webb's theme for the film Reno (the growth of Kane's empire), and bits of Webb's score for Five Came Back (introducing Walter Parks Thatcher). Editing One of the editing techniques used in Citizen Kane was the use of montage to collapse time and space, using an episodic sequence on the same set while the characters changed costume and make-up between cuts so that the scene following each cut would look as if it took place in the same location, but at a time long after the previous cut. In the breakfast montage, Welles chronicles the breakdown of Kane's first marriage in five vignettes that condense 16 years of story time into two minutes of screen time. Welles said that the idea for the breakfast scene "was stolen from The Long Christmas Dinner by Thornton Wilder ... a one-act play, which is a long Christmas dinner that takes you through something like 60 years of a family's life." The film often uses long dissolves to signify the passage of time and its psychological effect of the characters, such as the scene in which the abandoned sled is covered with snow after the young Kane is sent away with Thatcher. Welles was influenced by the editing theories of Sergei Eisenstein by using jarring cuts that caused "sudden graphic or associative contrasts", such as the cut from Kane's deathbed to the beginning of the News on the March sequence and a sudden shot of a shrieking cockatoo at the beginning of Raymond's flashback. 
Although the film typically favors mise-en-scène over montage, the scene in which Kane goes to Susan Alexander's apartment after first meeting her is the only one that is primarily cut as close-ups with shots and counter shots between Kane and Susan. Fabe says that "by using a standard Hollywood technique sparingly, [Welles] revitalizes its psychological expressiveness." Political themes Laura Mulvey explored the anti-fascist themes of Citizen Kane in her 1992 monograph for the British Film Institute. The News on the March newsreel presents Kane keeping company with Hitler and other dictators while he smugly assures the public that there will be no war. She wrote that the film reflects "the battle between intervention and isolationism" then being waged in the United States; the film was released six months before the attack on Pearl Harbor, while President Franklin D. Roosevelt was laboring to win public opinion for entering World War II. "In the rhetoric of Citizen Kane," Mulvey writes, "the destiny of isolationism is realised in metaphor: in Kane's own fate, dying wealthy and lonely, surrounded by the detritus of European culture and history." Journalist Ignacio Ramonet has cited the film as an early example of mass media manipulation of public opinion and the power that media conglomerates have on influencing the democratic process. He believes that this early example of a media mogul influencing politics is outdated and that today "there are media groups with the power of a thousand Citizen Kanes." Media mogul Rupert Murdoch is sometimes labeled as a latter-day Citizen Kane. Comparisons have also been made between the career and character of Donald Trump and Charles Foster Kane. Citizen Kane is reported to be one of Trump's favorite films, and his biographer Tim O’Brien has said that Trump is fascinated by and identifies with Kane. Reception Pre-release controversy To ensure that Hearst's life's influence on Citizen Kane was a secret, Welles limited access to dailies and managed the film's publicity. A December 1940 feature story in Stage magazine compared the film's narrative to Faust and made no mention of Hearst. The film was scheduled to premiere at RKO's flagship theater Radio City Music Hall on February 14, but in early January 1941 Welles was not finished with post-production work and told RKO that it still needed its musical score. Writers for national magazines had early deadlines and so a rough cut was previewed for a select few on January 3, 1941 for such magazines as Life, Look and Redbook. Gossip columnist Hedda Hopper (an arch-rival of Louella Parsons, the Hollywood correspondent for Hearst papers) showed up to the screening uninvited. Most of the critics at the preview said that they liked the film and gave it good advanced reviews. Hopper wrote negatively about it, calling the film a "vicious and irresponsible attack on a great man" and criticizing its corny writing and old fashioned photography. Friday magazine ran an article drawing point-by-point comparisons between Kane and Hearst and documented how Welles had led on Parsons. Up until this Welles had been friendly with Parsons. The magazine quoted Welles as saying that he couldn't understand why she was so nice to him and that she should "wait until the woman finds out that the picture's about her boss." Welles immediately denied making the statement and the editor of Friday admitted that it might be false. Welles apologized to Parsons and assured her that he had never made that remark. 
Shortly after Friday's article, | When Kane's parents introduced him to Thatcher, the boy struck Thatcher with his sled and attempted to run away. By the time Kane gained control of his trust at the age of 25, the mine's productivity and Thatcher's prudent investing had made him one of the richest men in the world. He took control of the New York Inquirer newspaper and embarked on a career of yellow journalism, publishing scandalous articles that attacked Thatcher's (and his own) business interests. Kane sold his newspaper empire to Thatcher after the 1929 stock market crash left him short of cash. Thompson interviews Kane's personal business manager, Mr. Bernstein. Bernstein recalls that Kane hired the best journalists available to build the Inquirer's circulation. Kane rose to power by successfully manipulating public opinion regarding the Spanish–American War and marrying Emily Norton, the niece of the President of the United States. Thompson interviews Kane's estranged best friend, Jedediah Leland, in a retirement home. Leland says that Kane's marriage to Emily disintegrated over the years, and he began an affair with amateur singer Susan Alexander while running for Governor of New York. Both his wife and his political opponent discovered the affair, and the public scandal ended his political career. Kane married Susan and forced her into a humiliating operatic career for which she had neither the talent nor the ambition, even building a large opera house for her. After Leland began to write a negative review of Susan's opera debut, Kane fired him but finished the negative review and printed it. Susan consents to an interview with Thompson and describes the aftermath of her opera career. Kane finally allowed her to abandon singing after she attempted suicide. After years spent dominated by Kane and living in isolation at Xanadu, she left him. Kane's butler Raymond recounts that, after Susan left him, Kane began violently destroying the contents of her bedroom. When he happened upon a snow globe, he grew calm and said "Rosebud". Thompson concludes that he is unable to solve the mystery and that the meaning of Kane's last word will remain forever unknown. Back at Xanadu, Kane's belongings are cataloged or discarded by the staff. They find the sled on which the eight-year-old Kane was playing on the day that he was taken from his home in Colorado. They throw it with other junk into a furnace and, as it burns, the camera reveals its trade name, not noticed by the staff: "Rosebud". Cast The beginning of the film's ending credits states that "Most of the principal actors in Citizen Kane are new to motion pictures. The Mercury Theatre is proud to introduce them." The cast is listed in the following order: Joseph Cotten as Jedediah Leland, Kane's best friend and a reporter for The Inquirer. Cotten also appears (hidden in darkness) in the News on the March screening room. Dorothy Comingore as Susan Alexander Kane, Kane's mistress and second wife. Agnes Moorehead as Mary Kane, Kane's mother. Ruth Warrick as Emily Monroe Norton Kane, Kane's first wife. Ray Collins as Jim W. Gettys, Kane's political rival for the post of Governor of New York. Erskine Sanford as Herbert Carter, editor of The Inquirer. Sanford also appears (hidden in darkness) in the News on the March screening room. Everett Sloane as Mr. Bernstein, Kane's friend and employee at The Inquirer. William Alland as Jerry Thompson, a reporter for News on the March. Alland also voices the narrator of the News on the March newsreel.
Paul Stewart as Raymond, Kane's butler. George Coulouris as Walter Parks Thatcher, a banker who becomes Kane's legal guardian. Fortunio Bonanova as Signor Matiste, vocal coach of Susan Alexander Kane. Gus Schilling as John, headwaiter at the El Rancho nightclub. Schilling also appears (hidden in darkness) in the News on the March screening room. Philip Van Zandt as Mr. Rawlston, producer of the News on the March newsreel. Georgia Backus as Bertha Anderson, attendant at the library of Walter Parks Thatcher. Harry Shannon as Jim Kane, Kane's father. Sonny Bupp as Charles Foster Kane III, Kane's son. Buddy Swan as Charles Foster Kane, age eight. Orson Welles as Charles Foster Kane, a wealthy newspaper publisher. Additionally, Charles Bennett appears as the entertainer at the head of the chorus line in the Inquirer party sequence, and cinematographer Gregg Toland makes a cameo appearance as an interviewer depicted in part of the News on the March newsreel. Actor Alan Ladd, still unknown at that time, makes a small appearance as a reporter smoking a pipe at the end of the film. Pre-production Development Hollywood had shown interest in Welles as early as 1936. He turned down three scripts sent to him by Warner Bros. In 1937, he declined offers from David O. Selznick, who asked him to head his film company's story department, and William Wyler, who wanted him for a supporting role in Wuthering Heights. "Although the possibility of making huge amounts of money in Hollywood greatly attracted him," wrote biographer Frank Brady, "he was still totally, hopelessly, insanely in love with the theater, and it is there that he had every intention of remaining to make his mark." Following "The War of the Worlds" broadcast of his CBS radio series The Mercury Theatre on the Air, Welles was lured to Hollywood with a remarkable contract. RKO Pictures studio head George J. Schaefer wanted to work with Welles after the notorious broadcast, believing that Welles had a gift for attracting mass attention. RKO was also uncharacteristically profitable and was entering into a series of independent production contracts that would add more artistically prestigious films to its roster. Throughout the spring and early summer of 1939, Schaefer constantly tried to lure the reluctant Welles to Hollywood. Welles was in financial trouble after the failure of his plays Five Kings and The Green Goddess. At first he simply wanted to spend three months in Hollywood and earn enough money to pay his debts and fund his next theatrical season. Welles first arrived on July 20, 1939, and on his first tour he called the movie studio "the greatest electric train set a boy ever had". Welles signed his contract with RKO on August 21; it stipulated that Welles would act in, direct, produce and write two films. Mercury would get $100,000 for the first film by January 1, 1940, plus 20% of profits after RKO recouped $500,000, and $125,000 for a second film by January 1, 1941, plus 20% of profits after RKO recouped $500,000. The most controversial aspect of the contract was granting Welles complete artistic control of the two films so long as RKO approved both projects' stories and so long as the budget did not exceed $500,000. RKO executives would not be allowed to see any footage until Welles chose to show it to them, and no cuts could be made to either film without Welles's approval. Welles was allowed to develop the story without interference, select his own cast and crew, and have the right of final cut.
Granting final cut privilege was unprecedented for a studio since it placed artistic considerations over financial investment. The contract was deeply resented in the film industry, and the Hollywood press took every opportunity to mock RKO and Welles. Schaefer remained a great supporter and saw the unprecedented contract as good publicity. Film scholar Robert L. Carringer wrote: "The simple fact seems to be that Schaefer believed Welles was going to pull off something really big almost as much as Welles did himself." Welles spent the first five months of his RKO contract trying to get his first project going, without success. "They are laying bets over on the RKO lot that the Orson Welles deal will end up without Orson ever doing a picture there," wrote The Hollywood Reporter. It was agreed that Welles would film Heart of Darkness, previously adapted for The Mercury Theatre on the Air, which would be presented entirely through a first-person camera. After elaborate pre-production and a day of test shooting with a hand-held camera—unheard of at the time—the project never reached production because Welles was unable to trim $50,000 from its budget. Schaefer told Welles that the $500,000 budget could not be exceeded; as war loomed, revenue was declining sharply in Europe by the fall of 1939. He then started work on the idea that became Citizen Kane. Knowing the script would take time to prepare, Welles suggested to RKO that while that was being done—"so the year wouldn't be lost"—he make a humorous political thriller. Welles proposed The Smiler with a Knife, from a novel by Cecil Day-Lewis. When that project stalled in December 1939, Welles began brainstorming other story ideas with screenwriter Herman J. Mankiewicz, who had been writing Mercury radio scripts. "Arguing, inventing, discarding, these two powerful, headstrong, dazzlingly articulate personalities thrashed toward Kane", wrote biographer Richard Meryman. Screenplay One of the long-standing controversies about Citizen Kane has been the authorship of the screenplay. Welles conceived the project with screenwriter Herman J. Mankiewicz, who was writing radio plays for Welles's CBS Radio series, The Campbell Playhouse. Mankiewicz based the original outline on the life of William Randolph Hearst, whom he knew socially and came to hate after being exiled from Hearst's circle. In February 1940 Welles supplied Mankiewicz with 300 pages of notes and put him under contract to write the first draft screenplay under the supervision of John Houseman, Welles's former partner in the Mercury Theatre. Welles later explained, "I left him on his own finally, because we'd started to waste too much time haggling. So, after mutual agreements on storyline and character, Mank went off with Houseman and did his version, while I stayed in Hollywood and wrote mine." Taking these drafts, Welles drastically condensed and rearranged them, then added scenes of his own. The industry accused Welles of underplaying Mankiewicz's contribution to the script, but Welles countered the attacks by saying, "At the end, naturally, I was the one making the picture, after all—who had to make the decisions. I used what I wanted of Mank's and, rightly or wrongly, kept what I liked of my own." The terms of the contract stated that Mankiewicz was to receive no credit for his work, as he was hired as a script doctor. 
Before he signed the contract Mankiewicz was particularly advised by his agents that all credit for his work belonged to Welles and the Mercury Theatre, the "author and creator". As the film neared release, however, Mankiewicz began wanting a writing credit for the film and even threatened to take out full-page ads in trade papers and to get his friend Ben Hecht to write an exposé for The Saturday Evening Post. Mankiewicz also threatened to go to the Screen Writers Guild and claim full credit for writing the entire script by himself. After lodging a protest with the Screen Writers Guild, Mankiewicz withdrew it, then vacillated. The question was resolved in January 1941 when the studio, RKO Pictures, awarded Mankiewicz credit. The guild credit form listed Welles first, Mankiewicz second. Welles's assistant Richard Wilson said that the person who circled Mankiewicz's name in pencil, then drew an arrow that put it in first place, was Welles. The official credit reads, "Screenplay by Herman J. Mankiewicz and Orson Welles". Mankiewicz's rancor toward Welles grew over the remaining 12 years of his life. Questions over the authorship of the Citizen Kane screenplay were revived in 1971 by influential film critic Pauline Kael, whose controversial 50,000-word essay "Raising Kane" was commissioned as an introduction to the shooting script in The Citizen Kane Book, published in October 1971. The book-length essay first appeared in February 1971, in two consecutive issues of The New Yorker magazine. In the ensuing controversy, Welles was defended by colleagues, critics, biographers and scholars, but his reputation was damaged by its charges. The essay's thesis was later questioned and some of Kael's findings were also contested in later years. Questions of authorship continued to come into sharper focus with Carringer's 1978 thoroughly-researched essay, "The Scripts of Citizen Kane". Carringer studied the collection of script records—"almost a day-to-day record of the history of the scripting"—that was then still intact at RKO. He reviewed all seven drafts and concluded that "the full evidence reveals that Welles's contribution to the Citizen Kane script was not only substantial but definitive." Sources Welles never confirmed a principal source for the character of Charles Foster Kane. Houseman wrote that Kane is a synthesis of different personalities, with Hearst's life used as the main source. Some events and details were invented, and Houseman wrote that he and Mankiewicz also "grafted anecdotes from other giants of journalism, including Pulitzer, Northcliffe and Mank's first boss, Herbert Bayard Swope." Welles said, "Mr. Hearst was quite a bit like Kane, although Kane isn't really founded on Hearst in particular. Many people sat for it, so to speak". He specifically acknowledged that aspects of Kane were drawn from the lives of two business tycoons familiar from his youth in Chicago—Samuel Insull and Harold Fowler McCormick. The character of Jedediah Leland was based on drama critic Ashton Stevens, George Stevens's uncle and Welles's close boyhood friend. Some detail came from Mankiewicz's own experience as a drama critic in New York. Many assumed that the character of Susan Alexander Kane was based on Marion Davies, Hearst's mistress whose career he managed and whom Hearst promoted as a motion picture actress. This assumption was a major reason Hearst tried to destroy Citizen Kane. 
Welles denied that the character was based on Davies, whom he called "an extraordinary woman—nothing like the character Dorothy Comingore played in the movie." He cited Insull's building of the Chicago Opera House, and McCormick's lavish promotion of the opera career of his second wife, Ganna Walska, as direct influences on the screenplay. As a known supporter of President Roosevelt, whom both McCormick and Hearst opposed based on his successful attempts to control the content of radio programs and his ongoing efforts to control print, Welles may have had incentive to use the film to smear both men. The character of political boss Jim W. Gettys is based on Charles F. Murphy, a leader in New York City's infamous Tammany Hall political machine. Welles credited "Rosebud" to Mankiewicz. Biographer Richard Meryman wrote that the symbol of Mankiewicz's own damaged childhood was a treasured bicycle, stolen while he visited the public library and not replaced by his family as punishment. He regarded it as the prototype of Charles Foster Kane's sled. In his 2015 Welles biography, Patrick McGilligan reported that Mankiewicz himself stated that the word "Rosebud" was taken from the name of a famous racehorse, Old Rosebud. Mankiewicz had a bet on the horse in the 1914 Kentucky Derby, which he won, and McGilligan wrote that "Old Rosebud symbolized his lost youth, and the break with his family". In testimony for the Lundberg suit, Mankiewicz said, "I had undergone psycho-analysis, and Rosebud, under circumstances slightly resembling the circumstances in [Citizen Kane], played a prominent part." The News on the March sequence that begins the film satirizes the journalistic style of The March of Time, the news documentary and dramatization series presented in movie theaters by Time Inc. From 1935 to 1938 Welles was a member of the uncredited company of actors that presented the original radio version. Houseman claimed that banker Walter P. Thatcher was loosely based on J. P. Morgan. Bernstein was named for Dr. Maurice Bernstein, appointed Welles's guardian; Sloane's portrayal was said to be based on Bernard Herrmann. Herbert Carter, editor of The Inquirer, was named for actor Jack Carter. Production Casting Citizen Kane was a rare film in that its principal roles were played by actors new to motion pictures. Ten were billed as Mercury Actors, members of the skilled repertory company assembled by Welles for the stage and radio performances of the Mercury Theatre, an independent theater company he founded with Houseman in 1937. "He loved to use the Mercury players," wrote biographer Charles Higham, "and consequently he launched several of them on movie careers." The film represents the feature film debuts of William Alland, Ray Collins, Joseph Cotten, Agnes Moorehead, Erskine Sanford, Everett Sloane, Paul Stewart, and Welles himself. Despite never having appeared in feature films, some of the cast members were already well known to the public. Cotten had recently become a Broadway star in the hit play The Philadelphia Story with Katharine Hepburn and Sloane was well known for his role on the radio show The Goldbergs. Mercury actor George Coulouris was a star of the stage in New York and London. Not all of the cast came from the Mercury Players. Welles cast Dorothy Comingore, an actress who played supporting parts in films since 1934 using the name "Linda Winters", as Susan Alexander Kane. 
A discovery of Charlie Chaplin, Comingore was recommended to Welles by Chaplin; Welles then met Comingore at a party in Los Angeles and immediately cast her. Welles had met stage actress Ruth Warrick while visiting New York on a break from Hollywood and remembered her as a good fit for Emily Norton Kane, later saying that she looked the part. Warrick told Carringer that she was struck by the extraordinary resemblance between herself and Welles's mother when she saw a photograph of Beatrice Ives Welles. She characterized her own personal relationship with Welles as motherly. "He trained us for films at the same time that he was training himself," recalled Agnes Moorehead. "Orson believed in good acting, and he realized that rehearsals were needed to get the most from his actors. That was something new in Hollywood: nobody seemed interested in bringing in a group to rehearse before scenes were shot. But Orson knew it was necessary, and we rehearsed every sequence before it was shot." When The March of Time narrator Westbrook Van Voorhis asked for $25,000 to narrate the News on the March sequence, Alland demonstrated his ability to imitate Van Voorhis, and Welles cast him instead. Welles later said that casting character actor Gino Corrado in the small part of the waiter at the El Rancho broke his heart, since Corrado had appeared in many Hollywood films, often as a waiter, and Welles wanted all of the actors to be new to films. Other uncredited roles went to Thomas A. Curran as Teddy Roosevelt in the faux newsreel; Richard Baer as Hillman, a man at Madison Square Garden, and a man in the News on the March screening room; and Alan Ladd, Arthur O'Connell and Louise Currie as reporters at Xanadu. Ruth Warrick (died 2005) was the last surviving member of the principal cast. Sonny Bupp (died 2007), who played Kane's young son, was the last surviving credited cast member. Kathryn Trosper Popper (died March 6, 2016) was reported to have been the last surviving actor to have appeared in Citizen Kane. Jean Forward (died September 2016), a soprano who dubbed the singing voice of Susan Alexander, was the last surviving performer from the film. Filming Production advisor Miriam Geiger quickly compiled a handmade film textbook for Welles, a practical reference book of film techniques that he studied carefully. He then taught himself filmmaking by matching its visual vocabulary to The Cabinet of Dr. Caligari, which he ordered from the Museum of Modern Art, and films by Frank Capra, René Clair, Fritz Lang, King Vidor and Jean Renoir. The one film he genuinely studied was John Ford's Stagecoach, which he watched 40 times. "As it turned out, the first day I ever walked onto a set was my first day as a director," Welles said. "I'd learned whatever I knew in the projection room—from Ford. After dinner every night for about a month, I'd run Stagecoach, often with some different technician or department head from the studio, and ask questions. 'How was this done?' 'Why was this done?' It was like going to school." Welles's cinematographer for the film was Gregg Toland, described by Welles as "just then, the number-one cameraman in the world." To Welles's astonishment, Toland visited him at his office and said, "I want you to use me on your picture." He had seen some of the Mercury stage productions (including Caesar) and said he wanted to work with someone who had never made a movie. RKO hired Toland on loan from Samuel Goldwyn Productions in the first week of June 1940.
"And he never tried to impress us that he was doing any miracles," Welles recalled. "I was calling for things only a beginner would have been ignorant enough to think anybody could ever do, and there he was, doing them." Toland later explained that he wanted to work with Welles because he anticipated the first-time director's inexperience and reputation for audacious experimentation in the theater would allow the cinematographer to try new and innovative camera techniques that typical Hollywood films would never have allowed him to do. Unaware of filmmaking protocol, Welles adjusted the lights on set as he was accustomed to doing in the theater; Toland quietly re-balanced them, and was angry when one of the crew informed Welles that he was infringing on Toland's responsibilities. During the first few weeks of June, Welles had lengthy discussions about the film with Toland and art director Perry Ferguson in the morning, and in the afternoon and evening he worked with actors and revised the script. On June 29, 1940—a Saturday morning when few inquisitive studio executives would be around—Welles began filming Citizen Kane. After the disappointment of having Heart of Darkness canceled, Welles followed Ferguson's suggestion and deceived RKO into believing that he was simply shooting camera tests. "But we were shooting the picture," Welles said, "because we wanted to get started and be already into it before anybody knew about it." At the time RKO executives were pressuring him to agree to direct a film called The Men from Mars, to capitalize on "The War of the Worlds" radio broadcast. Welles said that he would consider making the project but wanted to make a different film first. At this time he did not inform them that he had already begun filming Citizen Kane. The early footage was called "Orson Welles Tests" on all paperwork. The first "test" shot was the News on the March projection room scene, economically filmed in a real studio projection room in darkness that masked many actors who appeared in other roles later in the film. "At $809 Orson did run substantially beyond the test budget of $528—to create one of the most famous scenes in movie history," wrote Barton Whaley. The next scenes were the El Rancho nightclub scenes and the scene in which Susan attempts suicide. Welles later said that the nightclub set was available after another film had wrapped and that filming took 10 to 12 days to complete. For these scenes Welles had Comingore's throat sprayed with chemicals to give her voice a harsh, raspy tone. Other scenes shot in secret included those in which Thompson interviews Leland and Bernstein, which were also shot on sets built for other films. During production, the film was referred to as RKO 281. Most of the filming took place in what is now Stage 19 on the Paramount Pictures lot in Hollywood. There was some location filming at Balboa Park in San Diego and the San Diego Zoo. In the end of July, RKO approved the film and Welles was allowed to officially begin shooting, despite having already been filming "tests" for several weeks. Welles leaked stories to newspaper reporters that the tests had been so good that there was no need to re-shoot them. The first official scene to be shot was the breakfast montage sequence between Kane and his first wife Emily. To strategically save money and appease the RKO executives who opposed him, Welles rehearsed scenes extensively before actually shooting and filmed very few takes of each shot set-up. 
Welles never shot master shots for any scene after Toland told him that Ford never shot them. To appease the increasingly curious press, Welles threw a cocktail party for selected reporters, promising that they could watch a scene being filmed. When the journalists arrived Welles told them they had "just finished" shooting for the day but still had the party. Welles told the press that he was ahead of schedule (without factoring in the month of "test shooting"), thus discrediting claims that after a year in Hollywood without making a film he was a failure in the film industry. Welles usually worked 16 to 18 hours a day on the film. He often began work at 4 a.m. since the special effects make-up used to age him for certain scenes took up to four hours to apply. Welles used this time to discuss the day's shooting with Toland and other crew members. The special contact lenses used to make Welles look elderly proved very painful, and a doctor was employed to place them into Welles's eyes. Welles had difficulty seeing clearly while wearing them, which caused him to badly cut his wrist when shooting the scene in which Kane breaks up the furniture in Susan's bedroom. While shooting the scene in which Kane shouts at Gettys on the stairs of Susan Alexander's apartment building, Welles fell ten feet; an X-ray revealed two bone chips in his ankle. The injury required him to direct the film from a wheelchair for two weeks. He eventually wore a steel brace to resume performing on camera; it is visible in the low-angle scene between Kane and Leland after Kane loses the election. For the final scene, a stage at the Selznick studio was equipped with a working furnace, and multiple takes were required to show the sled being put into the fire and the word "Rosebud" consumed. Paul Stewart recalled that on the ninth take the Culver City Fire Department arrived in full gear because the furnace had grown so hot the flue caught fire. "Orson was delighted with the commotion", he said. When "Rosebud" was burned, Welles choreographed the scene while he had composer Bernard Herrmann's cue playing on the set. Unlike Schaefer, many members of RKO's board of governors did not like Welles or the control that his contract gave him. However such board members as Nelson Rockefeller and NBC chief David Sarnoff were sympathetic to Welles. Throughout production Welles had problems with these executives not respecting his contract's stipulation of non-interference and several spies arrived on set to report what they saw to the executives. When the executives would sometimes arrive on set unannounced the entire cast and crew would suddenly start playing softball until they left. Before official shooting began the executives intercepted all copies of the script and delayed their delivery to Welles. They had one copy sent to their office in New York, resulting in it being leaked to press. Principal shooting wrapped October 24. Welles then took several weeks away from the film for a lecture tour, during which he also scouted additional locations with Toland and Ferguson. Filming resumed November 15 with some re-shoots. Toland had to leave due to a commitment to shoot Howard Hughes' The Outlaw, but Toland's camera crew continued working on the film and Toland was replaced by RKO cinematographer Harry J. Wild. The final day of shooting on November 30 was Kane's death scene. Welles boasted that he only went 21 days over his official shooting schedule, without factoring in the month of "camera tests". 
According to RKO records, the film cost $839,727. Its estimated budget had been $723,800. Post-production Citizen Kane was edited by Robert Wise and assistant editor Mark Robson. Both would become successful film directors. Wise was hired after Welles finished shooting the "camera tests" and began officially making the film. Wise said that Welles "had an older editor assigned to him for those tests and evidently he was not too happy and asked to have somebody else. I was roughly Orson's age and had several good credits." Wise and Robson began editing the film while it was still shooting and said that they "could tell certainly that we were getting something very special. It was outstanding film day in and day out." Welles gave Wise detailed instructions and was usually not present during the film's editing. The film was very well planned out and intentionally shot for such post-production techniques as slow dissolves. The lack of coverage made editing easy since Welles and Toland edited the film "in camera" by leaving few options of how it could be put together. Wise said the breakfast table sequence took weeks to edit and get the correct "timing" and "rhythm" for the whip pans and overlapping dialogue. The News on the March sequence was edited by RKO's newsreel division to give it authenticity. They used stock footage from Pathé News and the General Film Library. During post-production Welles and special effects artist Linwood G. Dunn experimented with an optical printer to improve certain scenes that Welles found unsatisfactory from the footage. Whereas Welles was often immediately pleased with Wise's work, he would require Dunn and post-production audio engineer James G. Stewart to re-do their work several times until he was satisfied. Welles hired Bernard Herrmann to compose the film's score. Where most Hollywood film scores were written quickly, in as few as two or three weeks after filming was completed, Herrmann was given 12 weeks to write the music. He had sufficient time to do his own orchestrations and conducting, and worked on the film reel by reel as it was shot and cut. He wrote complete musical pieces for some of the montages, and Welles edited many of the scenes to match their length. Trailer Written and directed by Welles at Toland's suggestion, the theatrical trailer for Citizen Kane differs from other trailers in that it did not feature a single second of footage of the actual film itself, but acts as a wholly original, tongue-in-cheek, pseudo-documentary piece on the film's production. Filmed at the same time as Citizen Kane itself, it offers the only existing behind-the-scenes footage of the film. The trailer, shot by Wild instead of Toland, follows an unseen Welles as he provides narration for a tour around the film set, introductions to the film's core cast members, and a brief overview of Kane's character. The trailer also contains a number of trick shots, including one of Everett Sloane appearing at first to be running into the camera, which turns out to be the reflection of the camera in a mirror. At the time, it was almost unprecedented for a film trailer to not actually feature anything of the film itself; and while Citizen Kane is frequently cited as a groundbreaking, influential film, Simon Callow argues its trailer was no less original in its approach. Callow writes that it has "great playful charm ... it is a miniature documentary, almost an introduction to the cinema ... 
Teasing, charming, completely original, it is a sort of conjuring trick: Without his face appearing once on the screen, Welles entirely dominates its five [sic] minutes' duration." Style Film scholars and historians view Citizen Kane as Welles's attempt to create a new style of filmmaking by studying various forms of it and combining them into one. However, Welles stated that his love for cinema began only when he started working on the film. When asked where he got the confidence as a first-time director to direct a film so radically different from contemporary cinema, he responded, "Ignorance, ignorance, sheer ignorance—you know there's no confidence to equal it. It's only when you know something about a profession, I think, that you're timid or careful." David Bordwell wrote that "The best way to understand Citizen Kane is to stop worshipping it as a triumph of technique." Bordwell argues that the film did not invent any of its famous techniques such as deep focus cinematography, shots of the ceilings, chiaroscuro lighting and temporal jump-cuts, and that many of these stylistics had been used in German Expressionist films of the 1920s, such as The Cabinet of Dr. Caligari. But Bordwell asserts that the film did put them all together for the first time and perfected the medium in one single film. In a 1948 interview, D. W. Griffith said, "I loved Citizen Kane and particularly loved the ideas he took from me." Arguments against the film's cinematic innovations were made as early as 1946 when French historian Georges Sadoul wrote, "The film is an encyclopedia of old techniques." He pointed out such examples as compositions that used both the foreground and the background in the films of Auguste and Louis Lumière, special effects used in the films of Georges Méliès, shots of the ceiling in Erich von Stroheim's Greed and newsreel montages in the films of Dziga Vertov. French film critic André Bazin defended the film, writing: "In this respect, the accusation of plagiarism could very well be extended to the film's use of panchromatic film or its exploitation of the properties of gelatinous silver halide." Bazin disagreed with Sadoul's comparison to Lumière's cinematography since Citizen Kane used more sophisticated lenses, but acknowledged that it had similarities to such previous works as The 49th Parallel and The Power and the Glory. Bazin stated that "even if Welles did not invent the cinematic devices employed in Citizen Kane, one should nevertheless credit him with the invention of their meaning." Bazin championed the techniques in the film for its depiction of heightened reality, but Bordwell believed that the film's use of special effects contradicted some of Bazin's theories. Storytelling techniques Citizen Kane rejects the traditional linear, chronological narrative and tells Kane's story entirely in flashbacks using different points of view, many of them from Kane's aged and forgetful associates, the cinematic equivalent of the unreliable narrator in literature. Welles also dispenses with the idea of a single storyteller and uses multiple narrators to recount Kane's life, a technique not used previously in Hollywood films. Each narrator recounts a different part of Kane's life, with each story overlapping another. The film depicts Kane as an enigma, a complicated man who leaves viewers with more questions than answers as to his character, such as the newsreel footage where he is attacked for being both a communist and a fascist. 
The technique of flashbacks had been used in earlier films, notably The Power and the Glory (1933), but no film was as immersed in it as Citizen Kane. Thompson the reporter acts as a surrogate for the audience, questioning Kane's associates and piecing together his life. Films typically had an "omniscient perspective" at the time, which Marilyn Fabe says give the audience the "illusion that we are looking with impunity into a world which is unaware of our gaze". Citizen Kane also begins in that fashion until the News on the March sequence, after which we the audience see the film through the perspectives of others. The News on the March sequence gives an overview of Kane's entire life (and the film's entire story) at the beginning of the film, leaving the audience without the typical suspense of wondering how it will end. Instead, the film's repetitions of events compels the audience to analyze and wonder why Kane's life happened the way that it did, under the pretext of finding out what "Rosebud" means. The film then returns to the omniscient perspective in the final scene, when only the audience discovers what "Rosebud" is. Cinematography The most innovative technical aspect of Citizen Kane is the extended use of deep focus, where the foreground, background, and everything in between are all in sharp focus. Cinematographer Toland did this through his experimentation with lenses and lighting. Toland described the achievement in an article for Theatre Arts magazine, made possible by the sensitivity of modern speed film: New developments in the science of motion picture photography are not abundant at this advanced stage of the game but periodically one is perfected to make this a greater art. Of these I am in an excellent position to discuss what is termed "Pan-focus", as I have been active for two years in its development and used it for the first time in Citizen Kane. Through its use, it is possible to photograph action from a range of eighteen inches from the camera lens to over two hundred feet away, with extreme foreground and background figures and action both recorded in sharp relief. Hitherto, the camera had to be focused either for a close or a distant shot, all efforts to encompass both at the same time resulting in one or the other being out of focus. This handicap necessitated the breaking up of a scene into long and short angles, with much consequent loss of realism. With pan-focus, the camera, like the human eye, sees an entire panorama at once, with everything clear and lifelike. Another unorthodox method used in the film was the low-angle shots facing upwards, thus allowing ceilings to be shown in the background of several scenes. Every set was built with a ceiling which broke with studio convention, and many were constructed of fabric that concealed microphones. Welles felt that the camera should show what the eye sees, and that it was a bad theatrical convention to pretend that there was no ceiling—"a big lie in order to get all those terrible lights up there," he said. He became fascinated with the look of low angles, which made even dull interiors look interesting. One extremely low angle is used to photograph the encounter between Kane and Leland after Kane loses the election. A hole was dug for the camera, which required drilling into the concrete floor. Welles credited Toland on the same title card as himself. "It's impossible to say how much I owe to Gregg," he said. "He was superb." He called Toland "the best director of photography that ever existed." 
Sound Citizen Kanes sound was recorded by Bailey Fesler and re-recorded in post-production by audio engineer James G. Stewart, both of whom had worked in radio. Stewart said that Hollywood films never deviated from a basic pattern of how sound could be recorded or used, but with Welles "deviation from the pattern was possible because he demanded it." Although the film is known for its complex soundtrack, much of the audio is heard as it was recorded by Fesler and without manipulation. Welles used techniques from radio like overlapping dialogue. The scene in which characters sing "Oh, Mr. Kane" was especially complicated and required mixing several soundtracks together. He also used different "sound perspectives" to create the illusion of distances, such as in scenes at Xanadu where characters speak to each other at far distances. Welles experimented with sound in post-production, creating audio montages, and chose to create all of the sound effects for the film instead of using RKO's library of sound effects. Welles used an aural technique from radio called the "lightning-mix". Welles used this technique to link complex montage sequences via a series of related sounds or phrases. For example, Kane grows from a child into a young man in just two shots. As Thatcher hands eight-year-old Kane a sled and wishes him a Merry Christmas, the sequence suddenly jumps to a shot of Thatcher fifteen years later, completing the sentence he began in both the previous shot and the chronological past. Other radio techniques include using a number of voices, each saying a sentence or sometimes merely a fragment of a sentence, and splicing the dialogue together in quick succession, such as the projection room scene. The film's sound cost $16,996, but was originally budgeted at $7,288. Film critic and director François Truffaut wrote that "Before Kane, nobody in Hollywood knew how to set music properly in movies. Kane was the first, in fact the only, great film that uses radio techniques. ... A lot of filmmakers know enough to follow Auguste Renoir's advice to fill the eyes with images at all costs, but only Orson Welles understood that the sound track had to be filled in the same way." Cedric Belfrage of The Clipper wrote "of all of the delectable flavours that linger on the palate after seeing Kane, the use of sound is the strongest." Make-up The make-up for Citizen Kane was created and applied by Maurice Seiderman (1907–1989), a junior member of the RKO make-up department. He had not been accepted into the union, which recognized him as only an apprentice, but RKO nevertheless used him to make up principal actors. "Apprentices were not supposed to make up any principals, only extras, and an apprentice could not be on a set without a journeyman present," wrote make-up artist Dick Smith, who became friends with Seiderman in 1979. "During his years at RKO I suspect these rules were probably overlooked often." "Seiderman had gained a reputation as one of the most inventive and creatively precise up-and-coming makeup men in Hollywood," wrote biographer Frank Brady. On an early tour of RKO, Welles met Seiderman in the small make-up lab that he created for himself in an unused dressing room. "Welles fastened on to him at once," wrote biographer Charles Higham, as Seiderman had developed his own makeup methods "that ensured complete naturalness of expression—a naturalness unrivaled in Hollywood." 
Seiderman developed a thorough plan for aging the principal characters, first making a plaster cast of the face of each of the actors who aged. He made a plaster mold of Welles's body down to the hips. "My sculptural techniques for the characters' aging were handled by adding pieces of white modeling clay, which matched the plaster, onto the surface of each bust," Seiderman told Norman Gambill. When Seiderman achieved the desired effect, he cast the clay pieces in a soft plastic material that he formulated himself. These appliances were then placed onto the plaster bust and a four-piece mold was made for each phase of aging. The castings were then fully painted and paired with the appropriate wig for evaluation. Before the actors went before the cameras each day, the pliable pieces were applied directly to their faces to recreate Seiderman's sculptural image. The facial surface was underpainted in a flexible red plastic compound; the red ground resulted in a warmth of tone that was picked up by the panchromatic film. Over that was applied liquid grease paint, and finally a colorless translucent talcum. Seiderman created the effect of skin pores on Kane's face by stippling the surface with a negative cast made from an orange peel. Welles often arrived on the set at 2:30 am, as application of the sculptural make-up took 3½ hours for the oldest incarnation of Kane. The make-up included appliances to age Welles's shoulders, breast, and stomach. "In the film and production photographs, you can see that Kane had a belly that overhung," Seiderman said. "That was not a costume, it was the rubber sculpture that created the image. You could see how Kane's silk shirt clung wetly to the character's body. It could not have been done any other way." Seiderman worked with Charles Wright on the wigs. These went over a flexible skull cover that Seiderman created and sewed into place with elastic thread. When he found the wigs too full, he untied one hair at a time to alter their shape. Kane's mustache was inserted into the makeup surface a few hairs at a time, to realistically vary the color and texture. He also made scleral lenses for Welles, Dorothy Comingore, George Coulouris, and Everett Sloane to dull the brightness of their young eyes. The lenses took a long time to fit properly, and Seiderman began work on them before devising any of the other makeup. "I painted them to age in phases, ending with the blood vessels and the arcus senilis of old age." Seiderman's tour de force was the breakfast montage, shot all in one day. "Twelve years, two years shot at each scene," he said. The major studios gave screen credit for make-up only to the department head. When RKO make-up department head Mel Berns refused to share credit with Seiderman, who was only an apprentice, Welles told Berns that there would be no make-up credit. Welles signed a large advertisement in the Los Angeles newspaper:
THANKS TO EVERYBODY WHO GETS SCREEN CREDIT FOR "CITIZEN KANE"
AND THANKS TO THOSE WHO DON'T
TO ALL THE ACTORS, THE CREW, THE OFFICE, THE MUSICIANS, EVERYBODY
AND PARTICULARLY TO MAURICE SEIDERMAN, THE BEST MAKE-UP MAN IN THE WORLD
Sets Although credited as an assistant, the film's art direction was done by Perry Ferguson. Welles and Ferguson got along during their collaboration. In the weeks before production began, Welles, Toland and Ferguson met regularly to discuss the film and plan every shot, set design and prop.
Ferguson would take notes during these discussions and create rough designs of the sets and story boards for individual shots. After Welles approved the rough sketches, Ferguson made miniature models for Welles and Toland to experiment on with a periscope in order to rehearse and perfect each shot. Ferguson then had detailed drawings made for the set design, including the film's lighting design. The set design was an integral part of the film's overall look and Toland's cinematography. In the original script the Great Hall at Xanadu was modeled after the Great Hall in Hearst Castle and its design included a mixture of Renaissance and Gothic styles. "The Hearstian element is brought out in the almost perverse juxtaposition of incongruous architectural styles and motifs," wrote Carringer. Before RKO cut the film's budget, Ferguson's designs were more elaborate and resembled the production designs of early Cecil B. DeMille films and Intolerance. The budget cuts reduced Ferguson's budget by 33 percent and his work cost $58,775 total, which was below average at that time. To save costs Ferguson and Welles re-wrote scenes in Xanadu's living room and transported them to the Great Hall. A large staircase from another film was found and used at no additional cost. When asked about the limited budget, Ferguson said "Very often—as in that much-discussed 'Xanadu' set in Citizen Kane—we can make a foreground piece, a background piece, and imaginative lighting suggests a great deal more on the screen than actually exists on the stage." According to the film's official budget there were 81 sets built, but Ferguson said there were between 106 and 116. Still photographs of Oheka Castle in Huntington, New York, were used in the opening montage, representing Kane's Xanadu estate. Ferguson also designed statues from Kane's collection with styles ranging from Greek to German Gothic. The sets were also built to accommodate Toland's camera movements. Walls were built to fold and furniture could quickly be moved. The film's famous ceilings were made out of muslin fabric and camera boxes were built into the floors for low angle shots. Welles later said that he was proud that the film production value looked much more expensive than the film's budget. Although neither worked with Welles again, |
Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets, such as Chinese, Japanese and Korean, must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes ("word length"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings. These use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet (a short illustrative sketch of its variable-width behaviour appears below, after the telegraph-code examples). Genetic code Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence. Gödel code In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering). Other There are codes that use colors, like traffic lights, the color code employed to mark the nominal value of electrical resistors, or the color coding of trash cans devoted to specific types of garbage (paper, glass, organic, etc.). In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a (usually internet) retailer. In military environments, codes serve specific purposes: marking certain moments of the day, commanding infantry on the battlefield, etc. Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes. Musical scores are the most common way to encode music. Specific games have their own code systems to record matches, e.g. chess notation. Cryptography In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead. Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.)
to trivial (romance, games) can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is prior agreement on the meaning by both the sender and the receiver. Other examples Other examples of encoding include: Encoding (in cognition) - a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into a subjectively meaningful experience. A content format - a specific encoding format for converting a specific type of data to information. Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See | source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, this is a brief example. The mapping C = {a ↦ 0, b ↦ 01, c ↦ 011} is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab. Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T. The extension of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols. Variable-length codes In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words gives us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding. A prefix code is a code with the "prefix property": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes. Prefix codes are widely referred to as "Huffman codes" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard. Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code. Virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy Kraft's inequality (both properties are illustrated in the short sketch below). Error-correcting codes Codes may also be used to represent data in a way more resistant to errors in transmission or storage. These so-called error-correcting codes work by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hocquenghem, Turbo, Golay, Goppa, low-density parity-check codes, and space–time codes. Error-detecting codes can be optimised to detect burst errors, or random errors. Examples Codes in communication used for brevity A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively. Codes can be used for brevity.
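The prefix property and Kraft's inequality mentioned above can be made concrete with a small sketch. The following is a minimal Python illustration, not part of the original article; the dictionary C mirrors the worked example above (a ↦ 0, b ↦ 01, c ↦ 011), while the helper names (is_prefix_code, kraft_sum, encode) and the second code P are hypothetical choices made for this sketch.

# Minimal sketch: prefix property and Kraft's inequality for variable-length codes.
# Helper names and the code P are illustrative assumptions, not from the source text.

def is_prefix_code(code):
    # True if no codeword is a prefix of a different codeword.
    words = list(code.values())
    return not any(u != w and w.startswith(u) for u in words for w in words)

def kraft_sum(code, r=2):
    # Sum of r**(-len(w)) over all codewords; a value <= 1 is necessary for
    # unique decodability (McMillan) and achievable by some prefix code (Kraft).
    return sum(r ** -len(w) for w in code.values())

def encode(message, code):
    # The extension of the code: concatenate the codewords of the source symbols.
    return "".join(code[symbol] for symbol in message)

C = {"a": "0", "b": "01", "c": "011"}   # the example code from the text
print(encode("acab", C))                 # '0011001', as in the worked example
print(is_prefix_code(C))                 # False: '0' is a prefix of '01' and '011'
print(kraft_sum(C))                      # 0.875 <= 1, so unique decodability is possible

P = {"a": "0", "b": "10", "c": "11"}    # a prefix code over the same alphabets
print(is_prefix_code(P))                 # True
print(kraft_sum(P))                      # 1.0

Note that C is uniquely decodable but not a prefix code, so a decoder needs lookahead to tell where a codeword ends; prefix codes such as P (or those produced by Huffman's algorithm) avoid that, which is one reason they dominate in practice.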
When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or |
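As promised in the character-encoding passage above, here is a minimal Python sketch of UTF-8 as a variable-width encoding; the sample characters and variable names are illustrative choices for this sketch, not drawn from the original text. Commonly used ASCII characters occupy a single byte, characters from larger scripts take two to four bytes, and a single-byte encoding such as Latin-1 simply cannot represent them.

# Minimal sketch: UTF-8 is a variable-width encoding of the Unicode character set.
# The sample characters are arbitrary examples (ASCII, accented Latin, CJK, musical symbol).

samples = ["A", "é", "中", "𝄞"]

for ch in samples:
    encoded = ch.encode("utf-8")              # encode one character to its UTF-8 bytes
    print(ch, len(encoded), encoded.hex())    # byte lengths 1, 2, 3 and 4 respectively

# A single-byte encoding cannot model a character set with more than 256 characters:
try:
    "中".encode("latin-1")
except UnicodeEncodeError as error:
    print("latin-1 cannot represent this character:", error)

This is the trade-off described earlier: a variable-width encoding keeps frequent characters short (and UTF-8 remains byte-compatible with ASCII) while still covering a very large character set.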
lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirhines. They have the dental formula: Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates. The females usually have three pairs of nipples. After a meager 60-day gestation, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human care, they can live for up to 15 years, although their life expectancy in the wild is probably significantly shorter. Classification The five genera of cheirogaleids contain 34 species. Infraorder Lemuriformes Family Cheirogaleidae Genus Cheirogaleus: dwarf lemurs C. medius group Fat-tailed dwarf lemur, Cheirogaleus medius C. major group Lavasoa dwarf lemur, Cheirogaleus lavasoensis Greater dwarf lemur, Cheirogaleus major Furry-eared dwarf lemur, Cheirogaleus crossleyi Lesser iron-gray dwarf lemur, Cheirogaleus minusculus Sibree's dwarf lemur, Cheirogaleus sibreei Genus Microcebus: mouse lemurs Gray mouse lemur, Microcebus murinus Reddish-gray mouse lemur, Microcebus griseorufus Golden-brown mouse lemur, Microcebus ravelobensis Northern rufous mouse lemur, Microcebus tavaratra Sambirano mouse lemur, Microcebus sambiranensis Simmons' mouse lemur, Microcebus simmonsi Pygmy mouse lemur, Microcebus myoxinus Brown mouse lemur, Microcebus rufus Madame Berthe's mouse lemur, Microcebus berthae Goodman's mouse lemur, Microcebus lehilahytsara Jolly's mouse lemur, Microcebus jollyae MacArthur's mouse lemur, Microcebus macarthurii Mittermeier's mouse lemur, Microcebus mittermeieri Claire's mouse lemur, Microcebus mamiratra Bongolava mouse lemur, Microcebus bongolavensis Danfoss' mouse lemur, Microcebus danfossi Arnhold's mouse lemur, Microcebus arnholdi Margot Marsh's mouse lemur, | weigh no more than 500 grams, with some species weighing as little as 60 grams. Dwarf and mouse lemurs are nocturnal and arboreal. They are excellent climbers and can also jump far, using their long tails for balance. When on the ground (a rare occurrence), they move by hopping on their hind legs. They spend the day in tree hollows or leaf nests. Cheirogaleids are typically solitary, but sometimes live together in pairs. Their eyes possess a tapetum lucidum, a light-reflecting layer that improves their night vision. Some species, such as the lesser dwarf lemur, store fat at the hind legs and the base of the tail, and hibernate. Unlike lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirhines. They have the dental formula: Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates. The females usually have three pairs of nipples. After a meager 60-day gestation, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human |
This may exemplify a rare example of insular dwarfing in a mainland context, with the "islands" being formed by biogeographic barriers during arid climatic periods when forest distribution became patchy, and/or by the extensive river networks in the Amazon Basin. All callitrichids are arboreal. They are the smallest of the simian primates. They eat insects, fruit, and the sap or gum from trees; occasionally, they take small vertebrates. The marmosets rely quite heavily on tree exudates, with some species (e.g. Callithrix jacchus and Cebuella pygmaea) considered obligate exudativores. Callitrichids typically live in small, territorial groups of about five or six animals. Their social organization is unique among primates, and is called a "cooperative polyandrous group". This communal breeding system involves groups of multiple males and females, but only one female is reproductively active. Females mate with more than one male and each shares the responsibility of carrying the offspring. They are the only primate group that regularly produces twins, which constitute over 80% of births in species that have been studied. Unlike other male primates, male callitrichids generally provide as much parental care as females. Parental duties may include carrying, protecting, feeding, comforting, and even engaging in play behavior with offspring. In some cases, such as in the cotton-top tamarin (Saguinus oedipus), males, particularly those that are paternal, even show a greater involvement in caregiving than females. The typical social structure seems to constitute a breeding group, with several of their previous offspring living in the group and providing significant help in rearing the young. Species and subspecies list Taxa included in the Callitrichidae are: Family Callitrichidae Genus Cebuella Western pygmy marmoset, Cebuella pygmaea Eastern pygmy marmoset, Cebuella niveiventris Genus Mico Silvery marmoset, Mico argentatus Roosmalens' dwarf marmoset, Mico humilis White marmoset, Mico leucippe Black-tailed marmoset, Mico melanurus Schneider's marmoset, Mico schneideri Hershkovitz's marmoset, Mico intermedius Emilia's marmoset, Mico emiliae Black-headed marmoset, Mico nigriceps Marca's marmoset, Mico marcai Santarem marmoset, Mico humeralifer Gold-and-white marmoset, Mico chrysoleucos Maués marmoset, Mico mauesi Sateré marmoset, Mico saterei Rio Acarí marmoset, Mico acariensis Rondon's marmoset, Mico rondoni Munduruku marmoset, Mico munduruku Genus Callithrix Common | tamarin, Saguinus geoffroyi Subgenus Tamarinus Moustached tamarin, Saguinus mystax Spix's moustached tamarin, Saguinus mystax mystax Red-capped moustached tamarin, Saguinus mystax pileatus White-rump moustached tamarin, Saguinus mystax pluto White-lipped tamarin, Saguinus labiatus Geoffroy's red-bellied tamarin, Saguinus labiatus labiatus Gray's red-bellied tamarin, Saguinus labiatus rufiventer Thomas's red-bellied tamarin, Saguinus labiatus thomasi Emperor tamarin, Saguinus imperator Emperor tamarin, Saguinus imperator imperator Bearded emperor tamarin, Saguinus imperator subgrisescens Mottle-faced tamarin, Saguinus inustus Genus Leontocebus Black-mantled tamarin, Leontocebus nigricollis Spix's black-mantle tamarin, Leontocebus nigricollis nigricollis Graells's tamarin, Leontocebus nigricollis graellsi Hernández-Camacho's black-mantle tamarin, Leontocebus nigricollis hernandezi Brown-mantled tamarin, Leontocebus fuscicollis Avila Pires' saddle-back tamarin, Leontocebus fuscicollis avilapiresi Spix's saddle-back tamarin, Leontocebus 
fuscicollis fuscicollis Mura's saddleback tamarin, Leontocebus fuscicollis mura Lako's saddleback tamarin, Leontocebus fuscicollis primitivus Andean saddle-back tamarin, Leontocebus leucogenys Lesson's saddle-back tamarin, Leontocebus fuscus Cruz Lima's saddle-back tamarin, Leontocebus cruzlimai Weddell's saddle-back tamarin, Leontocebus weddelli Weddell's tamarin, Leontocebus weddelli weddelli Crandall's saddle-back tamarin, Leontocebus weddelli crandalli White-mantled tamarin, Leontocebus weddelli melanoleucus Golden-mantled tamarin, Leontocebus tripartitus Illiger's saddle-back tamarin, Leontocebus illigeri Red-mantled saddle-back tamarin, Leontocebus lagonotus Geoffroy's saddle-back tamarin, Leontocebus nigrifrons Genus Leontopithecus Golden lion tamarin, Leontopithecus rosalia Golden-headed lion tamarin, Leontopithecus chrysomelas Black lion tamarin, Leontopithecus chrysopygus Superagui lion tamarin, Leontopithecus caissara |
on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit. Classification Previously, New World monkeys were divided between Callitrichidae and this family. For a few recent years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while moving other genera from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family. Subfamily Cebinae (all capuchin monkeys) Genus Cebus (gracile capuchin monkeys) Colombian white-faced capuchin or Colombian white-headed capuchin, Cebus | of the brown capuchin, with a body length of 33 to 56 cm, and a weight of 2.5 to 3.9 kilograms. They are somewhat variable in form and coloration, but all have the wide, flat, noses typical of New World monkeys. They are omnivorous, mostly eating fruit and insects, although the proportions of these foods vary greatly between species. They have the dental formula: Females give birth to one or two young after a gestation period of between 130 and 170 days, depending on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit. Classification Previously, New World monkeys were divided between Callitrichidae and this family. For a few recent years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while moving other genera from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family. Subfamily Cebinae (all capuchin monkeys) Genus Cebus (gracile capuchin monkeys) Colombian white-faced capuchin or Colombian white-headed capuchin, Cebus capucinus Panamanian white-faced capuchin or Panamanian white-headed capuchin, Cebus imitator Marañón white-fronted capuchin, Cebus yuracus Shock-headed capuchin, Cebus cuscinus Spix's white-fronted capuchin, Cebus unicolor Humboldt's white-fronted capuchin, Cebus albifrons Guianan weeper |
examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm-like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish. In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyan and acanthodian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain. The earliest unequivocal fossils of cartilaginous fishes first appeared in the fossil record by about 430 million years ago, during the middle Wenlock Epoch of the Silurian period. The radiation of elasmobranchs can be divided into the taxa: Cladoselache, Eugeneodontiformes, Symmoriida, Xenacanthiformes, Ctenacanthiformes, Hybodontiformes, Galeomorphi, Squaliformes and Batoidea. By the start of the Early Devonian, 419 million years ago, jawed fishes had divided into three distinct groups: the now extinct placoderms (a paraphyletic assemblage of ancient armoured fishes), the bony fishes, and the clade that includes spiny sharks and early cartilaginous fish. The modern bony fishes, class Osteichthyes, appeared in the late Silurian or early Devonian, about 416 million years ago. The first abundant genus of shark, Cladoselache, appeared in the oceans during the Devonian Period. The first cartilaginous fishes evolved from Doliodus-like spiny shark ancestors. A Bayesian analysis of molecular data suggests that the Holocephali and Elasmobranchii diverged in the Silurian and that the sharks and rays/skates split in the Carboniferous. Period by period, the radiation can be summarised as follows:
Devonian (419–359 Ma) – Cladoselache: Cladoselache was the first abundant genus of primitive shark, appearing about 370 Ma. It grew to long, with anatomical features similar to modern mackerel sharks. It had a streamlined body almost entirely devoid of scales, with five to seven gill slits and a short, rounded snout that had a terminal mouth opening at the front of the skull. It had a very weak jaw joint compared with modern-day sharks, but it compensated for that with very strong jaw-closing muscles. Its teeth were multi-cusped and smooth-edged, making them suitable for grasping, but not tearing or chewing. Cladoselache therefore probably seized prey by the tail and swallowed it whole. It had powerful keels that extended onto the side of the tail stalk and a semi-lunate tail fin, with the superior lobe about the same size as the inferior. This combination helped with its speed and agility, which was useful when trying to outswim its probable predator, the heavily armoured long placoderm fish Dunkleosteus.
Carboniferous (359–299 Ma) (pictured: Orthacanthus senckenbergianus): Sharks underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian period caused many environmental niches to become unoccupied and allowed new organisms to evolve and fill these niches. The first 15 million years of the Carboniferous have very few terrestrial fossils. This gap in the fossil record is called Romer's gap after the American palaeontologist Alfred Romer. While it has long been debated whether the gap is a result of fossilisation or relates to an actual event, recent work indicates that the gap period saw a drop in atmospheric oxygen levels, indicating some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts, and the rise of the more advanced temnospondyl and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna. The Carboniferous seas were inhabited by many fish, mainly elasmobranchs (sharks and their relatives). These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other sharks had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the sharks were marine, but the Xenacanthida invaded fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large size. Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole. Freshwater fish were abundant, and included the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus.
Carboniferous – Stethacanthidae: As a result of the evolutionary radiation, Carboniferous sharks assumed a wide variety of bizarre shapes; e.g., sharks belonging to the family Stethacanthidae possessed a flat brush-like dorsal fin with a patch of denticles on its top. Stethacanthus's unusual fin may have been used in mating rituals. Apart from the fins, Stethacanthidae resembled Falcatus (below).
Carboniferous – Falcatus: Falcatus is a genus of small cladodont-toothed sharks which lived 335–318 Ma. They were about long. They are characterised by the prominent fin spines that curved anteriorly over their heads.
Carboniferous – Orodus: Orodus is another shark of the Carboniferous, a genus from the family Orodontidae that lived into the early Permian, from 303 to 295 Ma. It grew to in length.
Permian (298–252 Ma): The Permian ended with the most extensive extinction event recorded in paleontology: the Permian-Triassic extinction event. 90% to 95% of marine species became extinct, as well as 70% of all land organisms. Recovery from the Permian-Triassic extinction event was protracted; land ecosystems took 30 million years to recover, and marine ecosystems took even longer.
Triassic (252–201 Ma): The fish fauna of the Triassic was remarkably uniform, reflecting the fact that very few families survived the Permian extinction.
In turn, the Triassic ended with the | the epigonal organ (special tissue around the gonads, which is also thought to play a role in the immune system). They are also produced in the Leydig's organ, which is only found in certain cartilaginous fishes. The subclass Holocephali, which is a very specialized group, lacks both the Leydig's and epigonal organs. Appendages Apart from electric rays, which have a thick and flabby body, with soft, loose skin, chondrichthyans have tough skin covered with dermal teeth (again, Holocephali is an exception, as the teeth are lost in adults, only kept on the clasping organ seen on the caudal ventral surface of the male), also called placoid scales (or dermal denticles), making it feel like sandpaper. In most species, all dermal denticles are oriented in one direction, making the skin feel very smooth if rubbed in one direction and very rough if rubbed in the other. Originally, the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins are connected to the head and are very flexible. One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion. Body covering Chondrichthyans have tooth-like scales called dermal denticles or placoid scales. Denticles usually provide protection, and in most cases, streamlining. Mucous glands exist in some species, as well. It is assumed that their oral teeth evolved from dermal denticles that migrated into the mouth, but it could be the other way around, as the teleost bony fish Denticeps clupeoides has most of its head covered by dermal teeth (as does, probably, Atherion elymus, another bony fish). This is most likely a secondary evolved characteristic, which means there is not necessarily a connection between the teeth and the original dermal scales. The old placoderms did not have teeth at all, but had sharp bony plates in their mouth. Thus, it is unknown whether the dermal or oral teeth evolved first. It has even been suggested that the original bony plates of all vertebrates are now gone and that the present scales are just modified teeth, even if both the teeth and body armor had a common origin a long time ago. However, there is currently no evidence of this. Respiratory system All chondrichthyans breathe through five to seven pairs of gills, depending on the species. In general, pelagic species must keep swimming to keep oxygenated water moving through their gills, whilst demersal species can actively pump water in through their spiracles and out through their gills. However, this is only a general rule and many species differ. A spiracle is a small hole found behind each eye. These can be tiny and circular, such as found on the nurse shark (Ginglymostoma cirratum), to extended and slit-like, such as found on the wobbegongs (Orectolobidae). Many larger, pelagic species, such as the mackerel sharks (Lamnidae) and the thresher sharks (Alopiidae), no longer possess them. Nervous system In chondrichthyans, the nervous system is composed of a small brain, 8-10 pairs of cranial nerves, and a spinal chord with spinal nerves. They have several sensory organs which provide information to be processed. Ampullae of Lorenzini are a network of small jelly filled pores called electroreceptors which help the fish sense electric fields in water. 
This aids in finding prey, navigation, and sensing temperature. The Lateral line system has modified epithelial cells located externally which sense motion, vibration, and pressure in the water around them. Most species have large well-developed eyes. Also, they have very powerful nostrils and olfactory organs. Their inner ears consist of 3 large semicircular canals which aid in balance and orientation. Their sound detecting apparatus has limited range and is typically more powerful at lower frequencies. Some species have electric organs which can be used for defense and predation. They have relatively simple brains with the forebrain not greatly enlarged. The structure and formation of myelin in their nervous systems are nearly identical to that of tetrapods, which has led evolutionary biologists to believe that Chondrichthyes were a cornerstone group in the evolutionary timeline of myelin development. Immune system Like all other jawed vertebrates, members of Chondrichthyes have an adaptive immune system. Reproduction Fertilization is internal. Development is usually live birth (ovoviviparous species) but can be through eggs (oviparous). Some rare species are viviparous. There is no parental care after birth; however, some chondrichthyans do guard their eggs. Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks/rays when fished. Capture-induced parturition is often mistaken for natural birth by recreational fishers and is rarely considered in commercial fisheries management despite being shown to occur in at least 12% of live bearing sharks and rays (88 species to date). Classification The class Chondrichthyes has two subclasses: the subclass Elasmobranchii (sharks, rays, skates, and sawfish) and the subclass Holocephali (chimaeras). To see the full list of the species, click here. Evolution Cartilaginous fish are considered to have evolved from acanthodians. Originally assumed to be closely related to bony fish or a polyphyletic assemblage leading to both groups, the discovery of Entelognathus and several examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish. In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyian and acanthodiian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain. The earliest unequivocal fossils of cartilaginous fishes first appeared in the fossil record by about 430 million years ago, during the middle Wenlock Epoch of |
could have a future in medicine. The doctor offered to have Linnaeus live with his family in Växjö and to teach him physiology and botany. Nils accepted this offer. University studies Lund Rothman showed Linnaeus that botany was a serious subject. He taught Linnaeus to classify plants according to Tournefort's system. Linnaeus was also taught about the sexual reproduction of plants, according to Sébastien Vaillant. In 1727, Linnaeus, age 21, enrolled in Lund University in Skåne. He was registered as , the Latin form of his full name, which he also used later for his Latin publications. Professor Kilian Stobæus, natural scientist, physician and historian, offered Linnaeus tutoring and lodging, as well as the use of his library, which included many books about botany. He also gave the student free admission to his lectures. In his spare time, Linnaeus explored the flora of Skåne, together with students sharing the same interests. Uppsala In August 1728, Linnaeus decided to attend Uppsala University on the advice of Rothman, who believed it would be a better choice if Linnaeus wanted to study both medicine and botany. Rothman based this recommendation on the two professors who taught at the medical faculty at Uppsala: Olof Rudbeck the Younger and Lars Roberg. Although Rudbeck and Roberg had undoubtedly been good professors, by then they were older and not so interested in teaching. Rudbeck no longer gave public lectures, and had others stand in for him. The botany, zoology, pharmacology and anatomy lectures were not in their best state. In Uppsala, Linnaeus met a new benefactor, Olof Celsius, who was a professor of theology and an amateur botanist. He received Linnaeus into his home and allowed him use of his library, which was one of the richest botanical libraries in Sweden. In 1729, Linnaeus wrote a thesis, on plant sexual reproduction. This attracted the attention of Rudbeck; in May 1730, he selected Linnaeus to give lectures at the University although the young man was only a second-year student. His lectures were popular, and Linnaeus often addressed an audience of 300 people. In June, Linnaeus moved from Celsius's house to Rudbeck's to become the tutor of the three youngest of his 24 children. His friendship with Celsius did not wane and they continued their botanical expeditions. Over that winter, Linnaeus began to doubt Tournefort's system of classification and decided to create one of his own. His plan was to divide the plants by the number of stamens and pistils. He began writing several books, which would later result in, for example, and . He also produced a book on the plants grown in the Uppsala Botanical Garden, . Rudbeck's former assistant, Nils Rosén, returned to the University in March 1731 with a degree in medicine. Rosén started giving anatomy lectures and tried to take over Linnaeus's botany lectures, but Rudbeck prevented that. Until December, Rosén gave Linnaeus private tutoring in medicine. In December, Linnaeus had a "disagreement" with Rudbeck's wife and had to move out of his mentor's house; his relationship with Rudbeck did not appear to suffer. That Christmas, Linnaeus returned home to Stenbrohult to visit his parents for the first time in about three years. His mother had disapproved of his failing to become a priest, but she was pleased to learn he was teaching at the University. 
Expedition to Lapland During a visit with his parents, Linnaeus told them about his plan to travel to Lapland; Rudbeck had made the journey in 1695, but the detailed results of his exploration were lost in a fire seven years afterwards. Linnaeus's hope was to find new plants, animals and possibly valuable minerals. He was also curious about the customs of the native Sami people, reindeer-herding nomads who wandered Scandinavia's vast tundras. In April 1732, Linnaeus was awarded a grant from the Royal Society of Sciences in Uppsala for his journey. Linnaeus began his expedition from Uppsala on 12 May 1732, just before he turned 25. He travelled on foot and horse, bringing with him his journal, botanical and ornithological manuscripts and sheets of paper for pressing plants. Near Gävle he found great quantities of Campanula serpyllifolia, later known as Linnaea borealis, the twinflower that would become his favourite. He sometimes dismounted on the way to examine a flower or rock and was particularly interested in mosses and lichens, the latter a main part of the diet of the reindeer, a common and economically important animal in Lapland. Linnaeus travelled clockwise around the coast of the Gulf of Bothnia, making major inland incursions from Umeå, Luleå and Tornio. He returned from his six-month-long, over expedition in October, having gathered and observed many plants, birds and rocks. Although Lapland was a region with limited biodiversity, Linnaeus described about 100 previously unidentified plants. These became the basis of his book . However, on the expedition to Lapland, Linnaeus used Latin names to describe organisms because he had not yet developed the binomial system. In Linnaeus's ideas about nomenclature and classification were first used in a practical way, making this the first proto-modern Flora. The account covered 534 species, used the Linnaean classification system and included, for the described species, geographical distribution and taxonomic notes. It was Augustin Pyramus de Candolle who attributed Linnaeus with as the first example in the botanical genre of Flora writing. Botanical historian E. L. Greene described as "the most classic and delightful" of Linnaeus's works. It was also during this expedition that Linnaeus had a flash of insight regarding the classification of mammals. Upon observing the lower jawbone of a horse at the side of a road he was travelling, Linnaeus remarked: "If I only knew how many teeth and of what kind every animal had, how many teats and where they were placed, I should perhaps be able to work out a perfectly natural system for the arrangement of all quadrupeds." In 1734, Linnaeus led a small group of students to Dalarna. Funded by the Governor of Dalarna, the expedition was to catalogue known natural resources and discover new ones, but also to gather intelligence on Norwegian mining activities at Røros. Seminal years in the Dutch Republic (1735–38) Doctorate His relations with Nils Rosén having worsened, Linnaeus accepted an invitation from Claes Sohlberg, son of a mining inspector, to spend the Christmas holiday in Falun, where Linnaeus was permitted to visit the mines. In April 1735, at the suggestion of Sohlberg's father, Linnaeus and Sohlberg set out for the Dutch Republic, where Linnaeus intended to study medicine at the University of Harderwijk while tutoring Sohlberg in exchange for an annual salary. 
At the time, it was common for Swedes to pursue doctoral degrees in the Netherlands, then a highly revered place to study natural history. On the way, the pair stopped in Hamburg, where they met the mayor, who proudly showed them a supposed wonder of nature in his possession: the taxidermied remains of a seven-headed hydra. Linnaeus quickly discovered the specimen was a fake, cobbled together from the jaws and paws of weasels and the skins of snakes. The provenance of the hydra suggested to Linnaeus that it had been manufactured by monks to represent the Beast of Revelation. Even at the risk of incurring the mayor's wrath, Linnaeus made his observations public, dashing the mayor's dreams of selling the hydra for an enormous sum. Linnaeus and Sohlberg were forced to flee from Hamburg. Linnaeus began working towards his degree as soon as he reached Harderwijk, a university known for awarding degrees in as little as a week. He submitted a dissertation, written back in Sweden, entitled Dissertatio medica inauguralis in qua exhibetur hypothesis nova de febrium intermittentium causa, in which he laid out his hypothesis that malaria arose only in areas with clay-rich soils. Although he failed to identify the true source of disease transmission, (i.e., the Anopheles mosquito), he did correctly predict that Artemisia annua (wormwood) would become a source of antimalarial medications. Within two weeks he had completed his oral and practical examinations and was awarded a doctoral degree. That summer Linnaeus reunited with Peter Artedi, a friend from Uppsala with whom he had once made a pact that should either of the two predecease the other, the survivor would finish the decedent's work. Ten weeks later, Artedi drowned in the canals of Amsterdam, leaving behind an unfinished manuscript on the classification of fish. Publishing of One of the first scientists Linnaeus met in the Netherlands was Johan Frederik Gronovius, to whom Linnaeus showed one of the several manuscripts he had brought with him from Sweden. The manuscript described a new system for classifying plants. When Gronovius saw it, he was very impressed, and offered to help pay for the printing. With an additional monetary contribution by the Scottish doctor Isaac Lawson, the manuscript was published as (1735). Linnaeus became acquainted with one of the most respected physicians and botanists in the Netherlands, Herman Boerhaave, who tried to convince Linnaeus to make a career there. Boerhaave offered him a journey to South Africa and America, but Linnaeus declined, stating he would not stand the heat. Instead, Boerhaave convinced Linnaeus that he should visit the botanist Johannes Burman. After his visit, Burman, impressed with his guest's knowledge, decided Linnaeus should stay with him during the winter. During his stay, Linnaeus helped Burman with his . Burman also helped Linnaeus with the books on which he was working: and . George Clifford, Philip Miller, and Johann Jacob Dillenius In August 1735, during Linnaeus's stay with Burman, he met George Clifford III, a director of the Dutch East India Company and the owner of a rich botanical garden at the estate of Hartekamp in Heemstede. Clifford was very impressed with Linnaeus's ability to classify plants, and invited him to become his physician and superintendent of his garden. Linnaeus had already agreed to stay with Burman over the winter, and could thus not accept immediately. 
However, Clifford offered to compensate Burman by offering him a copy of Sir Hans Sloane's Natural History of Jamaica, a rare book, if he let Linnaeus stay with him, and Burman accepted. On 24 September 1735, Linnaeus moved to Hartekamp to become personal physician to Clifford, and curator of Clifford's herbarium. He was paid 1,000 florins a year, with free board and lodging. Though the agreement was only for the winter of that year, Linnaeus practically stayed there until 1738. It was here that he wrote a book, Hortus Cliffortianus, in the preface of which he described his experience as "the happiest time of my life". (A portion of Hartekamp was declared a public garden in April 1956 by the Heemstede local authority, and was named "Linnaeushof". It eventually became, as it is claimed, the biggest playground in Europe.) In July 1736, Linnaeus travelled to England, at Clifford's expense. He went to London to visit Sir Hans Sloane, a collector of natural history, and to see his cabinet, as well as to visit the Chelsea Physic Garden and its keeper, Philip Miller. He taught Miller about his new system of subdividing plants, as described in . Miller was in fact reluctant to use the new binomial nomenclature, preferring the classifications of Joseph Pitton de Tournefort and John Ray at first. Linnaeus nevertheless applauded Miller's Gardeners Dictionary; the conservative Scot actually retained in his dictionary a number of pre-Linnaean binomial signifiers discarded by Linnaeus but which have been retained by modern botanists. He only fully changed to the Linnaean system in the edition of The Gardeners Dictionary of 1768. Miller was ultimately impressed, and from then on started to arrange the garden according to Linnaeus's system. Linnaeus also travelled to Oxford University to visit the botanist Johann Jacob Dillenius. He failed to make Dillenius publicly fully accept his new classification system, though the two men remained in correspondence for many years afterwards. Linnaeus dedicated his Critica Botanica to him, as "opus botanicum quo absolutius mundus non-vidit". Linnaeus would later name a genus of tropical tree Dillenia in his honour. He then returned to Hartekamp, bringing with him many specimens of rare plants. The next year, 1737, he published , in which he described 935 genera of plants, and shortly thereafter he supplemented it with , with another sixty (sexaginta) genera. His work at Hartekamp led to another book, , a catalogue of the botanical holdings in the herbarium and botanical garden of Hartekamp. He wrote it in nine months (completed in July 1737), but it was not published until 1738. It contains the first use of the name Nepenthes, which Linnaeus used to describe a genus of pitcher plants. Linnaeus stayed with Clifford at Hartekamp until 18 October 1737 (new style), when he left the house to return to Sweden. Illness and the kindness of Dutch friends obliged him to stay some months longer in Holland. In May 1738, he set out for Sweden again. On the way home, he stayed in Paris for about a month, visiting botanists such as Antoine de Jussieu. After his return, Linnaeus never left Sweden again. Return to Sweden When Linnaeus returned to Sweden on 28 June 1738, he went to Falun, where he entered into an engagement to Sara Elisabeth Moræa. Three months later, he moved to Stockholm to find employment as a physician, and thus to make it possible to support a family. 
Once again, Linnaeus found a patron; he became acquainted with Count Carl Gustav Tessin, who helped him get work as a | to buy the collection. The acquaintance was a 24-year-old medical student, James Edward Smith, who bought the whole collection: 14,000 plants, 3,198 insects, 1,564 shells, about 3,000 letters and 1,600 books. Smith founded the Linnean Society of London five years later. The von Linné name ended with his son Carl, who never married. His other son, Johannes, had died aged 3. There are over two hundred descendants of Linnaeus through two of his daughters. Apostles During Linnaeus's time as Professor and Rector of Uppsala University, he taught many devoted students, 17 of whom he called "apostles". They were the most promising, most committed students, and all of them made botanical expeditions to various places in the world, often with his help. The amount of this help varied; sometimes he used his influence as Rector to grant his apostles a scholarship or a place on an expedition. To most of the apostles he gave instructions of what to look for on their journeys. Abroad, the apostles collected and organised new plants, animals and minerals according to Linnaeus's system. Most of them also gave some of their collection to Linnaeus when their journey was finished. Thanks to these students, the Linnaean system of taxonomy spread through the world without Linnaeus ever having to travel outside Sweden after his return from Holland. The British botanist William T. Stearn notes, without Linnaeus's new system, it would not have been possible for the apostles to collect and organise so many new specimens. Many of the apostles died during their expeditions. Early expeditions Christopher Tärnström, the first apostle and a 43-year-old pastor with a wife and children, made his journey in 1746. He boarded a Swedish East India Company ship headed for China. Tärnström never reached his destination, dying of a tropical fever on Côn Sơn Island the same year. Tärnström's widow blamed Linnaeus for making her children fatherless, causing Linnaeus to prefer sending out younger, unmarried students after Tärnström. Six other apostles later died on their expeditions, including Pehr Forsskål and Pehr Löfling. Two years after Tärnström's expedition, Finnish-born Pehr Kalm set out as the second apostle to North America. There he spent two-and-a-half years studying the flora and fauna of Pennsylvania, New York, New Jersey and Canada. Linnaeus was overjoyed when Kalm returned, bringing back with him many pressed flowers and seeds. At least 90 of the 700 North American species described in Species Plantarum had been brought back by Kalm. Cook expeditions and Japan Daniel Solander was living in Linnaeus's house during his time as a student in Uppsala. Linnaeus was very fond of him, promising Solander his eldest daughter's hand in marriage. On Linnaeus's recommendation, Solander travelled to England in 1760, where he met the English botanist Joseph Banks. With Banks, Solander joined James Cook on his expedition to Oceania on the Endeavour in 1768–71. Solander was not the only apostle to journey with James Cook; Anders Sparrman followed on the Resolution in 1772–75 bound for, among other places, Oceania and South America. Sparrman made many other expeditions, one of them to South Africa. Perhaps the most famous and successful apostle was Carl Peter Thunberg, who embarked on a nine-year expedition in 1770. He stayed in South Africa for three years, then travelled to Japan. 
All foreigners in Japan were forced to stay on the island of Dejima outside Nagasaki, so it was thus hard for Thunberg to study the flora. He did, however, manage to persuade some of the translators to bring him different plants, and he also found plants in the gardens of Dejima. He returned to Sweden in 1779, one year after Linnaeus's death. Major publications Systema Naturae The first edition of was printed in the Netherlands in 1735. It was a twelve-page work. By the time it reached its 10th edition in 1758, it classified 4,400 species of animals and 7,700 species of plants. People from all over the world sent their specimens to Linnaeus to be included. By the time he started work on the 12th edition, Linnaeus needed a new invention—the index card—to track classifications. In Systema Naturae, the unwieldy names mostly used at the time, such as "", were supplemented with concise and now familiar "binomials", composed of the generic name, followed by a specific epithet—in the case given, Physalis angulata. These binomials could serve as a label to refer to the species. Higher taxa were constructed and arranged in a simple and orderly manner. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers (see Gaspard Bauhin and Johann Bauhin) almost 200 years earlier, Linnaeus was the first to use it consistently throughout the work, including in monospecific genera, and may be said to have popularised it within the scientific community. After the decline in Linnaeus's health in the early 1770s, publication of editions of Systema Naturae went in two different directions. Another Swedish scientist, Johan Andreas Murray issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793. It was through the Systema Vegetabilium that Linnaeus's work became widely known in England, following its translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785). Orbis eruditi judicium de Caroli Linnaei MD scriptis ('Opinion of the learned world on the writings of Carl Linnaeus, Doctor') Published in 1740, this small octavo-sized pamphlet was presented to the State Library of New South Wales by the Linnean Society of NSW in 2018. This is considered among the rarest of all the writings of Linnaeus, and crucial to his career, securing him his appointment to a professorship of medicine at Uppsala University. From this position he laid the groundwork for his radical new theory of classifying and naming organisms for which he was considered the founder of modern taxonomy. (or, more fully, ) was first published in 1753, as a two-volume work. Its prime importance is perhaps that it is the primary starting point of plant nomenclature as it exists today. was first published in 1737, delineating plant genera. Around 10 editions were published, not all of them by Linnaeus himself; the most important is the 1754 fifth edition. In it Linnaeus divided the plant Kingdom into 24 classes. One, Cryptogamia, included all the plants with concealed reproductive parts (algae, fungi, mosses and liverworts and ferns). (1751) was a summary of Linnaeus's thinking on plant classification and nomenclature, and an elaboration of the work he had previously published in (1736) and (1737). 
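As a minimal illustration of the binomial convention described above under Systema Naturae (a generic name followed by a specific epithet, as in Physalis angulata), the short Python sketch below shows one way such a two-part name might be represented. The Binomial class and its label method are hypothetical and are not taken from the source; they only illustrate that the binomial acts as a concise, stable species label.

```python
# Hypothetical sketch: representing a Linnaean binomial (generic name + specific
# epithet) and formatting it as the familiar two-word species label.
from dataclasses import dataclass

@dataclass(frozen=True)
class Binomial:
    genus: str    # generic name, e.g. "Physalis"
    epithet: str  # specific epithet, e.g. "angulata"

    def label(self) -> str:
        # Conventionally the genus is capitalised and the epithet is lower-case.
        return f"{self.genus.capitalize()} {self.epithet.lower()}"

print(Binomial("Physalis", "angulata").label())  # Physalis angulata
print(Binomial("Homo", "sapiens").label())       # Homo sapiens
```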
Other publications forming part of his plan to reform the foundations of botany include his and : all were printed in Holland (as were (1737) and (1735)), the Philosophia being simultaneously released in Stockholm. Collections At the end of his lifetime the Linnean collection in Uppsala was considered one of the finest collections of natural history objects in Sweden. Next to his own collection he had also built up a museum for the university of Uppsala, which was supplied by material donated by Carl Gyllenborg (in 1744–1745), crown-prince Adolf Fredrik (in 1745), Erik Petreus (in 1746), Claes Grill (in 1746), Magnus Lagerström (in 1748 and 1750) and Jonas Alströmer (in 1749). The relation between the museum and the private collection was not formalised and the steady flow of material from Linnean pupils were incorporated to the private collection rather than to the museum. Linnaeus felt his work was reflecting the harmony of nature and he said in 1754 "the earth is then nothing else but a museum of the all-wise creator's masterpieces, divided into three chambers". He had turned his own estate into a microcosm of that 'world museum'. In April 1766 parts of the town were destroyed by a fire and the Linnean private collection was subsequently moved to a barn outside the town, and shortly afterwards to a single-room stone building close to his country house at Hammarby near Uppsala. This resulted in a physical separation between the two collections; the museum collection remained in the botanical garden of the university. Some material which needed special care (alcohol specimens) or ample storage space was moved from the private collection to the museum. In Hammarby the Linnean private collections suffered seriously from damp and the depredations by mice and insects. Carl von Linné's son (Carl Linnaeus) inherited the collections in 1778 and retained them until his own death in 1783. Shortly after Carl von Linné's death his son confirmed that mice had caused "horrible damage" to the plants and that also moths and mould had caused considerable damage. He tried to rescue them from the neglect they had suffered during his father's later years, and also added further specimens. This last activity however reduced rather than augmented the scientific value of the original material. In 1784 the young medical student James Edward Smith purchased the entire specimen collection, library, manuscripts, and correspondence of Carl Linnaeus from his widow and daughter and transferred the collections to London. Not all material in Linné's private collection was transported to England. Thirty-three fish specimens preserved in alcohol were not sent and were later lost. In London Smith tended to neglect the zoological parts of the collection; he added some specimens and also gave some specimens away. Over the following centuries the Linnean collection in London suffered enormously at the hands of scientists who studied the collection, and in the process disturbed the original arrangement and labels, added specimens that did not belong to the original series and withdrew precious original type material. Much material which had been intensively studied by Linné in his scientific career belonged to the collection of Queen Lovisa Ulrika (1720–1782) (in the Linnean publications referred to as "Museum Ludovicae Ulricae" or "M. L. U."). This collection was donated by her grandson King Gustav IV Adolf (1778–1837) to the museum in Uppsala in 1804. 
Another important collection in this respect was that of her husband King Adolf Fredrik (1710–1771) (in the Linnean sources known as "Museum Adolphi Friderici" or "Mus. Ad. Fr."), the wet parts (alcohol collection) of which were later donated to the Royal Swedish Academy of Sciences, and is today housed in the Swedish Museum of Natural History at Stockholm. The dry material was transferred to Uppsala. System of taxonomy The establishment of universally accepted conventions for the naming of organisms was Linnaeus's main contribution to taxonomy—his work marks the starting point of consistent use of binomial nomenclature. During the 18th century expansion of natural history knowledge, Linnaeus also developed what became known as the Linnaean taxonomy; the system of scientific classification now widely used in the biological sciences. A previous zoologist Rumphius (1627–1702) had more or less approximated the Linnaean system and his material contributed to the later development of the binomial scientific classification by Linnaeus. The Linnaean system classified nature within a nested hierarchy, starting with three kingdoms. Kingdoms were divided into classes and they, in turn, into orders, and thence into genera (singular: genus), which were divided into species (singular: species). Below the rank of species he sometimes recognised taxa of a lower (unnamed) rank; these have since acquired standardised names such as variety in botany and subspecies in zoology. Modern taxonomy includes a rank of family between order and genus and a rank of phylum between kingdom and class that were not present in Linnaeus's original system. Linnaeus's groupings were based upon shared physical characteristics, and not simply upon differences. Of his higher groupings, only those for animals are still in use, and the groupings themselves have been significantly changed since their conception, as have the principles behind them. Nevertheless, Linnaeus is credited with establishing the idea of a hierarchical structure of classification which is based upon observable characteristics and intended to reflect natural relationships. While the underlying details concerning what are considered to be scientifically valid "observable characteristics" have changed with expanding knowledge (for example, DNA sequencing, unavailable in Linnaeus's time, has proven to be a tool of considerable utility for classifying living organisms and establishing their evolutionary relationships), the fundamental principle remains sound. Human taxonomy Linnaeus's system of taxonomy was especially noted as the first to include humans (Homo) taxonomically grouped with apes (Simia), under the header of Anthropomorpha. German biologist Ernst Haeckel speaking in 1907 noted this as the "most important sign of Linnaeus's genius". Linnaeus classified humans among the primates beginning with the first edition of . During his time at Hartekamp, he had the opportunity to examine several monkeys and noted similarities between them and man. He pointed out both species basically have the same anatomy; except for speech, he found no other differences. Thus he placed man and monkeys under the same category, Anthropomorpha, meaning "manlike." This classification received criticism from other biologists such as Johan Gottschalk Wallerius, Jacob Theodor Klein and Johann Georg Gmelin on the ground that it is illogical to describe man as human-like. 
In a letter to Gmelin from 1747, Linnaeus replied: It does not please [you] that I've placed Man among the Anthropomorpha, perhaps because of the term 'with human form', but man learns to know himself. Let's not quibble over words. It will be the same to me whatever name we apply. But I seek from you and from the whole world a generic difference between man and simian that [follows] from the principles of Natural History. I absolutely know of none. If only someone might tell me a single one! If I would have called man a simian or vice versa, I would have brought together all the theologians against me. Perhaps I ought to have by virtue of the law of the discipline. The theological concerns were twofold: first, putting man at the same level as monkeys or apes would lower the spiritually higher position that man was assumed to have in the great chain of being, and second, because the Bible says man was created in the image of God (theomorphism), if monkeys/apes and humans were not distinctly and separately designed, that would mean monkeys and apes were created in the image of God as well. This was something many could not accept. The conflict between world views that was caused by asserting man was a type of animal would simmer for a century until the much greater, and still ongoing, creation–evolution controversy began in earnest with the publication of On the Origin of Species by Charles Darwin in 1859. After such criticism, Linnaeus felt he needed to explain himself more clearly. The 10th edition of introduced new terms, including Mammalia and Primates, the latter of which would replace Anthropomorpha as well as giving humans the full binomial Homo sapiens. The new classification received less criticism, but many natural historians still believed he had demoted humans from their former place of ruling over nature and not being a part of it. Linnaeus believed that man biologically belongs to the animal kingdom and had to be included in it. In his book , he said, "One should not vent one's wrath on animals, Theology decree that man has a soul and that the animals are mere 'aoutomata mechanica,' but I believe they would be better advised that animals have a soul and that the difference is of nobility." Linnaeus added a second species to the genus Homo in based on a figure and description by Jacobus Bontius from a 1658 publication: Homo troglodytes ("caveman") and published a third in 1771: Homo lar. Swedish historian Gunnar Broberg states that the new human species Linnaeus described were actually simians or native people clad in skins to frighten colonial settlers, whose appearance had been exaggerated in accounts to Linnaeus. In early editions of , many well-known legendary creatures were included such as the phoenix, dragon, manticore, and satyrus, which Linnaeus collected into the catch-all category Paradoxa. Broberg thought Linnaeus was trying to offer a natural explanation and demystify the world of superstition. Linnaeus tried to debunk some of these creatures, as he had with the hydra; regarding the purported remains of dragons, Linnaeus wrote that they were either derived from lizards or rays. For Homo troglodytes he asked the Swedish East India Company to search for one, but they did not find any signs of its existence. Homo lar has since been reclassified as Hylobates lar, the lar gibbon. 
In the first edition of , Linnaeus subdivided the human species into four varieties based on continent and skin colour: "Europæus albesc[ens]" (whitish European), "Americanus rubesc[ens]" (reddish American), "Asiaticus fuscus" (tawny Asian) and "Africanus nigr[iculus]" (blackish African). In the tenth edition of Systema Naturae he further detailed phenotypical characteristics for each variety, based on the concept of the four temperaments from classical antiquity, and changed the description of Asians' skin tone to "luridus" (yellow). Additionally, Linnaeus created a wastebasket taxon "monstrosus" for "wild and monstrous humans, unknown groups, and more or less abnormal people". In 1959, W. T. Stearn designated Linnaeus to be the lectotype of H. sapiens. Influences and economic beliefs Linnaeus's applied science was inspired not only by the instrumental utilitarianism general to the early Enlightenment, but also by his adherence to the older economic doctrine of Cameralism. Additionally, Linnaeus was a state interventionist. He supported tariffs, levies, export bounties, quotas, embargoes, navigation acts, subsidised investment capital, ceilings on wages, cash grants, state-licensed producer monopolies, and cartels. Commemoration Anniversaries of Linnaeus's birth, especially in centennial years, have been marked by major celebrations. Linnaeus has appeared on numerous Swedish postage stamps and banknotes. There are numerous statues of Linnaeus in countries around the world. The Linnean Society of London has awarded the Linnean Medal for excellence in botany or zoology since 1888. Following approval by the Riksdag of Sweden, Växjö University and Kalmar College merged on 1 January 2010 to become Linnaeus University. Other things named after Linnaeus include the twinflower genus Linnaea, Linnaeosicyos (a monotypic genus in the family Cucurbitaceae), the crater Linné on the Earth's moon, a street in Cambridge, Massachusetts, and the cobalt sulfide mineral Linnaeite. Commentary Andrew Dickson White wrote in A History of the Warfare of Science with Theology in Christendom (1896): Linnaeus ... was the most eminent naturalist of his time, a wide observer, a close thinker; but the atmosphere in which he lived and moved and had his being was saturated with biblical theology, and this permeated all his thinking. ... Toward the end of his life he timidly advanced the hypothesis that all the species of one genus constituted at the creation one species; and from the last edition of his Systema Naturæ he quietly left out the strongly orthodox statement of the |
in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias). Importance for humans and ecosystems Human settlements More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals. Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard. Tourism Coasts, especially those with beaches and warm water, attract tourists, often leading to the development of seaside resort communities. In many island nations such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing. Growth management and coastal management can be a challenge for coastal local authorities who often struggle to provide the infrastructure required by new residents, and poor construction management practices often leave these communities and infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices such as beach nourishment are applied, or, when the coastal infrastructure is no longer financially sustainable, managed retreat is used to move communities away from the coast. Ecosystem services Types Emergent coastline According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change, or local uplift. Emergent coastlines are identifiable by the coastal landforms, which are above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned" landforms, such as rias (drowned valleys) and fjords. Concordant coastline According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines feature distinctive landforms because the rocks are eroded by the ocean waves. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings. Other coastal categories A cliffed coast or abrasion coast is one where marine action has produced steep declivities known as cliffs. A flat coast is one where the land gradually descends into the sea. A graded shoreline is one where wind and water action has produced a flat and straight coastline. Landforms The following articles describe some coastal landforms: Bay Headland Cove Peninsula Cliff erosion Much of the sediment deposited along a coast is the result of erosion of a surrounding cliff, or bluff. Sea cliffs retreat landward because of the constant undercutting of slopes by waves. If the slope/cliff being undercut is made of unconsolidated sediment it will erode at a much faster rate than a cliff made of bedrock. 
A natural arch is formed when a headland is eroded through by waves. Sea caves are made when certain rock beds are more susceptible to erosion than the surrounding rock beds because of different areas of weakness. These areas are eroded at a faster pace, creating a hole or crevice that, through time, by means of wave action and erosion, becomes a cave. A stack is formed when a headland is eroded away by wave and wind action. A stump is a shortened sea stack that has been eroded away or fallen because of instability. Wave-cut notches are caused by the undercutting of overhanging slopes which leads to increased stress on cliff material and a greater probability that the slope material will fall. The fallen debris accumulates at the bottom of the cliff and is eventually removed by waves. A wave-cut platform forms after erosion and retreat of a sea cliff has been occurring for a long time. Gently sloping wave-cut platforms develop early on in the first stages of cliff retreat. Later, the length of the platform decreases because the waves lose their energy as they break further offshore. Coastal features formed by sediment Beach Beach cusps Cuspate foreland Dune system Mudflat Raised beach Ria Shoal Spit Strand plain Surge channel Tombolo Coastal features formed by another feature Lagoon Salt marsh Mangrove forests Kelp forests Coral reefs Oyster reefs Other features on the coast Concordant coastline Discordant coastline Fjord Island Island arc Machair In geology The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past. Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles. | the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. As of October 2010, about 2.86% of exclusive economic zones were part of marine protected areas. The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (e.g. seagrass, salt marsh etc.) whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems). Formation Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean. 
Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 meters (13 feet); mesotidal coasts with a tidal range of 2 to 4 meters (7 to 13 feet); and microtidal coasts with a tidal range of less than 2 meters (7 feet) (see the sketch below). The distinction between macrotidal and mesotidal coasts is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than in macrotidal coasts. Waves erode the coastline as they break on shore, releasing their energy; the larger the wave the more energy it releases and the more sediment it moves. Coastlines with longer shores have more room for the waves to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast. Sediment deposited by rivers is the dominant influence on the amount of sediment along coastlines that have estuaries. Today riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for coastlines of tropical islands. Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias). 
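The tidal-range thresholds quoted above (macrotidal above 4 m, mesotidal 2–4 m, microtidal below 2 m) can be expressed as a small classification function. This is a minimal sketch; the function name and the handling of the exact boundary values are assumptions rather than anything specified in the text.

```python
def classify_coast_by_tidal_range(tidal_range_m: float) -> str:
    """Classify a coast by tidal range (in metres), per the thresholds quoted above.

    How the exact boundary values 2 m and 4 m are assigned is an assumption here;
    the text only gives the broad ranges.
    """
    if tidal_range_m > 4.0:
        return "macrotidal"   # funnel-shaped estuaries, no barrier islands or lagoons
    if tidal_range_m >= 2.0:
        return "mesotidal"    # wave action more important for bedforms than on macrotidal coasts
    return "microtidal"       # likewise wave-dominated, with the smallest tidal range

print(classify_coast_by_tidal_range(11.7))  # macrotidal
print(classify_coast_by_tidal_range(0.5))   # microtidal
```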
moved into that position, then it is likely catatonia. If the patient has a “lead-pipe rigidity” then NMS should be the prime suspect. Diagnosis There is not yet a definitive consensus regarding diagnostic criteria of catatonia. In the American Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the World Health Organization's eleventh edition of the International Classification of Disease (ICD-11) the classification is more homogeneous than in earlier editions. Prominent researchers in the field have other suggestions for diagnostic criteria. DSM-5 classification The DSM-5 does not classify catatonia as an independent disorder, but rather it classifies it as catatonia associated with another mental disorder, due to another medical condition, or as unspecified catatonia. Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, medical condition, or unspecified: stupor: no psycho-motor activity, not actively relating to the environment; catalepsy: passive induction of a posture held against gravity; waxy flexibility: allowing positioning by an examiner and maintaining that position; mutism: no, or very little, verbal response (exclude if known aphasia); negativism: opposition or no response to instructions or external stimuli; posturing: spontaneous and active maintenance of a posture against gravity; mannerisms: odd, circumstantial caricatures of normal actions; stereotypy: repetitive, abnormally frequent, non-goal-directed movements; agitation, not influenced by external stimuli; grimacing: keeping a fixed facial expression; echolalia: mimicking another's speech; echopraxia: mimicking another's movements. Other disorders (additional code 293.89 [F06.1] to indicate the presence of the co-morbid catatonia): Catatonia associated with autism spectrum disorder Catatonia associated with schizophrenia spectrum and other psychotic disorders Catatonia associated with brief psychotic disorder Catatonia associated with schizophreniform disorder Catatonia associated with schizoaffective disorder Catatonia associated with a substance-induced psychotic disorder Catatonia associated with bipolar and related disorders Catatonia associated with major depressive disorder Catatonic disorder due to another medical condition If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered. ICD-11 classification In ICD-11 catatonia is defined as a syndrome of primarily psychomotor disturbances that is characterized by the simultaneous occurrence of several symptoms such as stupor; catalepsy; waxy flexibility; mutism; negativism; posturing; mannerisms; stereotypies; psychomotor agitation; grimacing; echolalia and echopraxia. Catatonia may occur in the context of specific mental disorders, including mood disorders, schizophrenia or other primary psychotic disorders, and neurodevelopmental disorders, and may be induced by psychoactive substances, including medications. Catatonia may also be caused by a medical condition not classified under mental, behavioral, or neurodevelopmental disorders. Assessment/Physical Catatonia is often overlooked and under-diagnosed. Patients with catatonia most commonly have an underlying psychiatric disorder; for this reason, physicians may overlook signs of catatonia due to the severity of the psychosis the patient is presenting with. 
Furthermore, the patient may not be presenting with the common signs of catatonia such as mutism and posturing. Additionally, the motor abnormalities seen in catatonia are also present in other psychiatric disorders. For example, a patient with mania will show increased motor activity that may progress to excited catatonia. One way in which physicians can differentiate between the two is to observe the motor abnormality. Patients with mania present with increased goal-directed activity. On the other hand, the increased activity in catatonia is not goal-directed and often repetitive. Catatonia is a clinical diagnosis and there is no specific laboratory test to diagnose it. However, certain testing can help determine what is causing the catatonia. An EEG will likely show diffuse slowing. If seizure activity is driving the syndrome, then an EEG would also be helpful in detecting this. CT or MRI will not show catatonia; however, they might reveal abnormalities that might be leading to the syndrome. Metabolic screens, inflammatory markers, or autoantibodies may reveal reversible medical causes of catatonia. Vital signs should be frequently monitored as catatonia can progress to malignant catatonia, which is life-threatening. Malignant catatonia is characterized by fever, hypertension, tachycardia, and tachypnea. Rating scale Various rating scales for catatonia have been developed; however, their utility for clinical care has not been well established. The most commonly used scale is the Bush-Francis Catatonia Rating Scale (BFCRS). The scale is composed of 23 items, with the first 14 items being used as the screening tool. If 2 of the 14 are positive, this prompts further evaluation and completion of the remaining 9 items (see the sketch below). A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. While proven useful in the past, barbiturates are no longer commonly used in psychiatry, leaving benzodiazepines or ECT as the usual options. Treatment The initial treatment of catatonia is to stop any medication that could potentially be contributing to the syndrome. These may include steroids, stimulants, anticonvulsants, neuroleptics, dopamine blockers, etc. The next step is to provide a “lorazepam challenge,” in which patients are given 2 mg of IV lorazepam (or another benzodiazepine). Most patients with catatonia will respond significantly to this within the first 15–30 minutes. If no change is observed after the first dose, then a second dose is given and the patient is re-examined. If the patient responds to the lorazepam challenge, then lorazepam can be scheduled at interval doses until the catatonia resolves. The lorazepam must be tapered slowly; otherwise, the catatonia symptoms may return. The underlying cause of the catatonia should also be treated during this time. If within a week the catatonia is not resolved, then ECT can be used to reverse the symptoms. ECT in combination with benzodiazepines is used to treat malignant catatonia. In France, zolpidem has also been used in diagnosis, and response may occur within the same time period. Ultimately the underlying cause needs to be treated. Electroconvulsive therapy (ECT) is a well-acknowledged, effective treatment for catatonia. ECT has also shown favorable outcomes in patients with chronic catatonia. However, it has been pointed out that further high-quality randomized controlled trials are needed to evaluate the efficacy, tolerance, and protocols of ECT in catatonia. 
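The screening rule described above for the Bush-Francis Catatonia Rating Scale (the first 14 of the 23 items act as a screen, and two or more positive items prompt completion of the remaining 9) can be sketched as follows. This is an illustrative sketch only: the function name and the use of simple numeric scores are assumptions, and it is not clinical software.

```python
from typing import Sequence

def bfcrs_screen_suggests_catatonia(first_14_scores: Sequence[int]) -> bool:
    """Return True when 2 or more of the 14 screening items are present (score > 0),
    which, per the rule described above, prompts completion of the full 23-item
    scale. Hypothetical illustration, not a clinical instrument."""
    if len(first_14_scores) != 14:
        raise ValueError("expected scores for exactly 14 screening items")
    items_present = sum(1 for score in first_14_scores if score > 0)
    return items_present >= 2

# Example: two items present (say, mutism and posturing) -> screen positive
example = [0, 0, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(bfcrs_screen_suggests_catatonia(example))  # True
```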
Antipsychotics should be used with care as they can worsen catatonia and are the cause of neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic. Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system, reduced incidence of psychosis and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it produces its therapeutic effects by producing glutamate antagonism via modulation of AMPA receptors. Complications, outcomes, and recurrence Patients may suffer several complications from being in a catatonic state. The nature of these complications will depend on the type of catatonia being experienced by the patient. For example, patients presenting with retarded catatonia may have refusal to eat which will in turn lead to malnutrition and dehydration. Furthermore, if immobility is a symptom the patient is presenting with, then they may develop pressure ulcers, muscle contractions, and are at risk of developing deep vein thrombosis (DVT) and pulmonary embolus (PE). Patients with excited catatonia may be aggressive and violent, and physical trauma may result from this. Catatonia may progress to the malignant type which will present with autonomic instability and may be life-threatening. Other complications also include the development of pneumonia and neuroleptic malignant syndrome. Patients who experience an episode of catatonia are more likely to suffer recurrence. Treatment response for patients with catatonia is 50-70% and these patients have a good prognosis. However, failure to respond to medication is a very poor prognosis. Many | serious adverse effects. History It was first described in 1874 by Karl Ludwig Kahlbaum as (Catatonia or Tension Insanity). Causes Catatonia is almost always secondary to another underlying illness, often a psychiatric disorder. Mood disorders such as a bipolar disorder and depression are the most common etiologies to progress to catatonia. Other psychiatric associations include schizophrenia and other primary psychotic disorders. It also is related to autism spectrum disorders. Catatonia is also seen in many medical disorders, including infections (such as encephalitis), autoimmune disorders, meningitis, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia). Epidemiology Catatonia has been mostly studied in acutely ill psychiatric patients. Catatonia frequently goes unrecognized, leading to the belief that the syndrome is rare; however, this is not true and prevalence has been reported to be as high as 10% in patients with acute psychiatric illnesses. One large population estimate has suggested that the incidence of catatonia is 10.6 episodes per 100 000 person-years. It occurs in males and females in approximately equal numbers. 21-46% of all catatonia cases can be attributed to a general medical condition. 
Pathogenesis/Mechanism The pathophysiology that leads to catatonia is still poorly understood, and a definite mechanism remains unknown. Neurologic studies have implicated several pathways; however, it remains unclear whether these findings are the cause or the consequence of the disorder. Abnormalities in GABA, glutamate, serotonin, and dopamine signaling are believed to be implicated in catatonia. Furthermore, it has also been hypothesized that pathways connecting the basal ganglia with the cortex and thalamus are involved in the development of catatonia. Signs and symptoms The presentation of a patient with catatonia varies greatly depending on the subtype and underlying cause, and onset can be acute or subtle. Because most patients with catatonia have an underlying psychiatric illness, the majority will present with worsening depression, mania, or psychosis followed by catatonia symptoms. Catatonia presents as a motor disturbance in which patients display a marked reduction in movement, marked agitation, or a mixture of both, despite having the physical capacity to move normally. These patients may be unable to start an action or to stop one. Movements and mannerisms may be repetitive or purposeless. The most common signs of catatonia are immobility, mutism, withdrawal and refusal to eat, staring, negativism, posturing, rigidity, waxy flexibility/catalepsy, stereotypy (purposeless, repetitive movements), echolalia or echopraxia, and verbigeration (repeating meaningless phrases). It should not be assumed that patients presenting with catatonia are unaware of their surroundings, as some patients can recall their catatonic state and their actions in detail. There are several subtypes of catatonia, characterized by the specific movement disturbance and associated features. Although catatonia can be divided into various subtypes, its natural history is often fluctuant, and different states can exist within the same individual. Subtypes Retarded/Withdrawn Catatonia: This form of catatonia is characterized by decreased response to external stimuli, immobility or inhibited movement, mutism, staring, posturing, and negativism. Patients may sit or stand in the same position for hours, may hold odd positions, and may resist movement of their extremities. Excited Catatonia: Excited catatonia is characterized by odd mannerisms and gestures, purposeless or inappropriate actions, excessive motor activity, restlessness, stereotypy, impulsivity, agitation, and combativeness. Speech and actions may be repetitive or mimic another person's. People in this state are extremely hyperactive and may have delusions and hallucinations. Catatonic excitement is commonly cited as one of the most dangerous mental states in psychiatry. Malignant Catatonia: Malignant catatonia is a life-threatening condition that may progress rapidly within a few days. It is characterized by fever; abnormalities in blood pressure, heart rate, and respiratory rate; diaphoresis (sweating); and delirium. Certain lab findings are common with this presentation; however, they are nonspecific, which means that they are also present in other conditions and do not by themselves diagnose catatonia. These lab findings include leukocytosis, elevated creatine kinase, and low serum iron. The signs and symptoms of malignant catatonia overlap significantly with those of neuroleptic malignant syndrome (NMS), so a careful history, review of medications, and physical exam are critical to properly differentiate these conditions. 
For example, if the patient has waxy flexibility and holds a position against gravity when passively moved into that position, then it is likely catatonia. If the patient has "lead-pipe rigidity", then NMS should be the prime suspect. Diagnosis There is not yet a definitive consensus regarding the diagnostic criteria of catatonia. In the American Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) and the World Health Organization's eleventh edition of the International Classification of Diseases (ICD-11), the classification is more homogeneous than in earlier editions. Prominent researchers in the field have other suggestions for diagnostic criteria. DSM-5 classification The DSM-5 does not classify catatonia as an independent disorder, but rather classifies it as catatonia associated with another mental disorder, catatonia due to another medical condition, or unspecified catatonia. Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, a medical condition, or as unspecified: stupor: no psychomotor activity; not actively relating to the environment; catalepsy: passive induction of a |
"Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers). By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality. Etymology The Roman number system was very cumbersome, in part because there was no concept of zero. The Arabic numeral system spread from the Arabic world to Europe in the Middle Ages. In this transition, the Arabic word for zero صفر (sifr) was adopted into Medieval Latin as cifra, and then into Middle French as . This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood. The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers". Versus codes In non-technical usage, a "(secret) code" typically means a "cipher". Within technical discussions, however, the words "code" and "cipher" refer to two different concepts. Codes work at the level of meaning—that is, words or phrases are converted into something else and this chunking generally shortens the message. An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams. Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way Japanese utilize Kanji (meaning Chinese characters in Japanese) characters to supplement their language. ex "The quick brown fox jumps over the lazy dog" becomes "The quick brown 狐 jumps 上 the lazy 犬". Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. 
In some cases the terms codes and ciphers are also used synonymously with substitution and transposition. Historically, cryptography was split into a dichotomy of codes and ciphers, and coding had its own terminology, analogous to that for ciphers: "encoding, codetext, decoding" and so on. However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique. Types There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys. Historical Pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). 
For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs. Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) |
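To make the two techniques concrete, here is a minimal sketch in Python (not drawn from any particular source). The substitution table is the one implied by the "GOOD DOG" example above, and the permutation used for the transposition is simply one assumed ordering that happens to reproduce the sample output "DGOGDOO"; both stand in for the key that a real cipher would use.

def substitute(plaintext, mapping):
    # Monoalphabetic substitution: each letter is replaced via a fixed table (the key).
    return "".join(mapping.get(ch, ch) for ch in plaintext)

def transpose(plaintext, order):
    # Transposition: the letters are kept but re-ordered by a permutation (the key); spaces are dropped.
    letters = [ch for ch in plaintext if ch != " "]
    return "".join(letters[i] for i in order)

substitution_key = {"G": "P", "O": "L", "D": "X"}   # mapping taken from the example above
transposition_key = [3, 0, 1, 6, 4, 2, 5]           # illustrative permutation chosen to match "DGOGDOO"

print(substitute("GOOD DOG", substitution_key))     # PLLX XLP
print(transpose("GOOD DOG", transposition_key))     # DGOGDOO

# Decryption reverses the key: invert the substitution table (or the permutation).
inverse_key = {v: k for k, v in substitution_key.items()}
print(substitute("PLLX XLP", inverse_key))          # GOOD DOG

Recovering the plaintext without the table or the permutation amounts to the kind of cryptanalysis that makes these classical ciphers easy to crack.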
at colleges and we started selling records, we were O.K. The whole outlaw thing, it had nothing to do with the music, it was something that got written in an article, and the young people said, 'Well, that's pretty cool.' And started listening." (Willie Nelson) The term outlaw country is traditionally associated with Willie Nelson, Jerry Jeff Walker, Hank Williams, Jr., Merle Haggard, Waylon Jennings and Joe Ely. It was encapsulated in the 1976 album Wanted! The Outlaws. Though the outlaw movement as a cultural fad had died down after the late 1970s (with Jennings noting in 1978 that it had gotten out of hand and led to real-life legal scrutiny), many Western and outlaw country music artists maintained their popularity during the 1980s by forming supergroups, such as The Highwaymen, Texas Tornados, and Bandido. Country pop Country pop or soft pop, with roots in the countrypolitan sound, folk music, and soft rock, is a subgenre that first emerged in the 1970s. Although the term first referred to country music songs and artists that crossed over to top 40 radio, country pop acts are now more likely to cross over to adult contemporary music. It started with pop music singers like Glen Campbell, Bobbie Gentry, John Denver, Olivia Newton-John, Anne Murray, B. J. Thomas, the Bellamy Brothers, and Linda Ronstadt having hits on the country charts. Between 1972 and 1975, singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles ("Rocky Mountain High", "Sunshine on My Shoulders", "Annie's Song", "Thank God I'm a Country Boy", and "I'm Sorry"), and was named Country Music Entertainer of the Year in 1975. The year before, Olivia Newton-John, an Australian pop singer, won the "Best Female Country Vocal Performance" award as well as the Country Music Association's most coveted award for females, "Female Vocalist of the Year". In response, George Jones, Tammy Wynette, Jean Shepard and other traditional Nashville country artists dissatisfied with the new trend formed the short-lived "Association of Country Entertainers" in 1974; the ACE soon unraveled in the wake of Jones and Wynette's bitter divorce and Shepard's realization that most others in the industry lacked her passion for the movement. During the mid-1970s, Dolly Parton, a successful mainstream country artist since the late 1960s, mounted a high-profile campaign to cross over to pop music, culminating in her 1977 hit "Here You Come Again", which topped the U.S. country singles chart and also reached No. 3 on the pop singles charts. Parton's male counterpart, Kenny Rogers, came from the opposite direction, aiming his music at the country charts after a successful career in pop, rock and folk music with the First Edition, achieving success the same year with "Lucille", which topped the country charts and reached No. 5 on the U.S. pop singles charts, as well as No. 1 on the British all-genre chart. Parton and Rogers would both continue to have success on both country and pop charts simultaneously, well into the 1980s. Country music propelled Kenny Rogers' career, making him a three-time Grammy Award winner and six-time Country Music Association Awards winner. He sold more than 50 million albums in the US, and one of his songs, "The Gambler", inspired multiple TV movies, with Rogers as the main character. Artists like Crystal Gayle, Ronnie Milsap and Barbara Mandrell would also find success on the pop charts with their records. 
In 1975, author Paul Hemphill stated in the Saturday Evening Post, "Country music isn't really country anymore; it is a hybrid of nearly every form of popular music in America." During the early 1980s, country artists continued to see their records perform well on the pop charts. Willie Nelson and Juice Newton each had two songs in the top 5 of the Billboard Hot 100 in the early eighties: Nelson charted "Always on My Mind" (No. 5, 1982) and "To All the Girls I've Loved Before" (No. 5, 1984, a duet with Julio Iglesias), and Newton achieved success with "Queen of Hearts" (No. 2, 1981) and "Angel of the Morning" (No. 4, 1981). Four country songs topped the Billboard Hot 100 in the 1980s: "Lady" by Kenny Rogers, from the late fall of 1980; "9 to 5" by Dolly Parton, "I Love a Rainy Night" by Eddie Rabbitt (these two back-to-back at the top in early 1981); and "Islands in the Stream", a duet by Dolly Parton and Kenny Rogers in 1983, a pop-country crossover hit written by Barry, Robin, and Maurice Gibb of the Bee Gees. Newton's "Queen of Hearts" almost reached No. 1, but was kept out of the spot by the pop ballad juggernaut "Endless Love" by Diana Ross and Lionel Richie. The move of country music toward neotraditional styles led to a marked decline in country/pop crossovers in the late 1980s, and only one song in that period—Roy Orbison's "You Got It", from 1989—made the top 10 of both the Billboard Hot Country Singles" and Hot 100 charts, due largely to a revival of interest in Orbison after his sudden death. The only song with substantial country airplay to reach number one on the pop charts in the late 1980s was "At This Moment" by Billy Vera and the Beaters, an R&B song with slide guitar embellishment that appeared at number 42 on the country charts from minor crossover airplay. The record-setting, multi-platinum group Alabama was named Artist of the Decade for the 1980s by the Academy of Country Music. Country rock Country rock is a genre that started in the 1960s but became prominent in the 1970s. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the "old values" of rock n' roll. At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock. Early innovators in this new style of music in the 1960s and 1970s included Bob Dylan, who was the first to revert to country music with his 1967 album John Wesley Harding (and even more so with that album's follow-up, Nashville Skyline), followed by Gene Clark, Clark's former band the Byrds (with Gram Parsons on Sweetheart of the Rodeo) and its spin-off the Flying Burrito Brothers (also featuring Gram Parsons), guitarist Clarence White, Michael Nesmith (the Monkees and the First National Band), the Grateful Dead, Neil Young, Commander Cody, the Allman Brothers, the Marshall Tucker Band, Poco, Buffalo Springfield, Stephen Stills' band Manassas and Eagles, among many, even the former folk music duo Ian & Sylvia, who formed Great Speckled Bird in 1969. The Eagles would become the most successful of these country rock acts, and their compilation album Their Greatest Hits (1971–1975) remains the second-best-selling album in the US with 29 million copies sold. The Rolling Stones also got into the act with songs like "Dead Flowers" and a country version of "Honky Tonk Women". 
Described by AllMusic as the "father of country-rock", Gram Parsons' work in the early 1970s was acclaimed for its purity and for his appreciation for aspects of traditional country music. Though his career was cut tragically short by his 1973 death, his legacy was carried on by his protégé and duet partner Emmylou Harris; Harris would release her debut solo in 1975, an amalgamation of country, rock and roll, folk, blues and pop. Subsequent to the initial blending of the two polar opposite genres, other offspring soon resulted, including Southern rock, heartland rock and in more recent years, alternative country. In the decades that followed, artists such as Juice Newton, Alabama, Hank Williams, Jr. (and, to an even greater extent, Hank Williams III), Gary Allan, Shania Twain, Brooks & Dunn, Faith Hill, Garth Brooks, Dwight Yoakam, Steve Earle, Dolly Parton, Rosanne Cash and Linda Ronstadt moved country further towards rock influence. Neocountry In 1980, a style of "neocountry disco music" was popularized by the film Urban Cowboy, which also included more traditional songs such as "The Devil Went Down to Georgia" by the Charlie Daniels Band. It was during this time that a glut of pop-country crossover artists began appearing on the country charts: former pop stars Bill Medley (of the Righteous Brothers), "England Dan" Seals (of England Dan and John Ford Coley), Tom Jones, and Merrill Osmond (both alone and with some of his brothers; his younger sister Marie Osmond was already an established country star) all recorded significant country hits in the early 1980s. Sales in record stores rocketed to $250 million in 1981; by 1984, 900 radio stations began programming country or neocountry pop full-time. As with most sudden trends, however, by 1984 sales had dropped below 1979 figures. Truck driving country Truck driving country music is a genre of country music and is a fusion of honky-tonk, country rock and the Bakersfield sound. It has the tempo of country rock and the emotion of honky-tonk, and its lyrics focus on a truck driver's lifestyle. Truck driving country songs often deal with the profession of trucking and love. Well-known artists who sing truck driving country include Dave Dudley, Red Sovine, Dick Curless, Red Simpson, Del Reeves, the Willis Brothers and Jerry Reed, with C. W. McCall and Cledus Maggard (pseudonyms of Bill Fries and Jay Huguely, respectively) being more humorous entries in the subgenre. Dudley is known as the father of truck driving country. Neotraditionalist movement During the mid-1980s, a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts, in favor of more, traditional, "back-to-basics" production. Many of the artists during the latter half of the 1980s drew on traditional honky-tonk, bluegrass, folk and western swing. Artists who typified this sound included Travis Tritt, Reba McEntire, George Strait, Keith Whitley, Alan Jackson, John Anderson, Patty Loveless, Kathy Mattea, Randy Travis, Dwight Yoakam, Clint Black, Ricky Skaggs, and the Judds. Beginning in 1989, a confluence of events brought an unprecedented commercial boom to country music. New marketing strategies were used to engage fans, powered by technology that more accurately tracked the popularity of country music, and boosted by a political and economic climate that focused attention on the genre. 
Garth Brooks ("Friends in Low Places") in particular attracted fans with his fusion of neotraditionalist country and stadium rock. Other artists such as Brooks and Dunn ("Boot Scootin' Boogie") also combined conventional country with slick, rock elements, while Lorrie Morgan, Mary Chapin Carpenter, and Kathy Mattea updated neotraditionalist styles. Fifth generation (1990s) Country music was aided by the U.S. Federal Communications Commission's (FCC) Docket 80–90, which led to a significant expansion of FM radio in the 1980s by adding numerous higher-fidelity FM signals to rural and suburban areas. At this point, country music was mainly heard on rural AM radio stations; the expansion of FM was particularly helpful to country music, which migrated to FM from the AM band as AM became overcome by talk radio (the country music stations that stayed on AM developed the classic country format for the AM audience). At the same time, beautiful music stations already in rural areas began abandoning the format (leading to its effective demise) to adopt country music as well. This wider availability of country music led to producers seeking to polish their product for a wider audience. In 1990, Billboard, which had published a country music chart since the 1940s, changed the methodology it used to compile the chart: singles sales were removed from the methodology, and only airplay on country radio determined a song's place on the chart. In the 1990s, country music became a worldwide phenomenon thanks to Garth Brooks, who enjoyed one of the most successful careers in popular music history, breaking records for both sales and concert attendance throughout the decade. The RIAA has certified his recordings at a combined (128× platinum), denoting roughly 113 million U.S. shipments. Other artists who experienced success during this time included Clint Black, John Michael Montgomery, Tracy Lawrence, Tim McGraw, Kenny Chesney, Travis Tritt, Alan Jackson and the newly formed duo of Brooks & Dunn; George Strait, whose career began in the 1980s, also continued to have widespread success in this decade and beyond. Toby Keith began his career as a more pop-oriented country singer in the 1990s, evolving into an outlaw persona in the early 2000s with Pull My Chain and its follow-up, Unleashed. Success of female artists Female artists such as Reba McEntire, Patty Loveless, Faith Hill, Martina McBride, Deana Carter, LeAnn Rimes, Mindy McCready, Pam Tillis, Lorrie Morgan, Shania Twain, and Mary Chapin Carpenter all released platinum-selling albums in the 1990s. The Dixie Chicks became one of the most popular country bands in the 1990s and early 2000s. Their 1998 debut album Wide Open Spaces went on to become certified 12x platinum while their 1999 album Fly went on to become 10x platinum. After their third album, Home, was released in 2003, the band made political news in part because of lead singer Natalie Maines's comments disparaging then-President George W. Bush while the band was overseas (Maines stated that she and her bandmates were ashamed to be from the same state as Bush, who had just commenced the Iraq War a few days prior). The comments caused a rift between the band and the country music scene, and the band's fourth (and most recent) album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall among non-country audiences but largely ignored among country audiences. 
After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before reuniting in 2016 and releasing new material in 2020. Shania Twain became the best-selling female country artist of the decade. This was primarily due to the success of her breakthrough sophomore album, 1995's The Woman in Me, which was certified 12x platinum and sold over 20 million copies worldwide, and its follow-up, 1997's Come On Over, which was certified 20x platinum and sold over 40 million copies. Come On Over became a major worldwide phenomenon and one of the world's best-selling albums of 1998, 1999 and 2000; it also went on to become the best-selling country album of all time. Unlike the majority of her contemporaries, Twain enjoyed international success on a scale seen by very few country artists before or after her. Critics have noted that Twain owed much of her success to breaking free of traditional country stereotypes and incorporating elements of rock and pop into her music. In 2002, she released her successful fourth studio album, Up!, which was certified 11x platinum and sold over 15 million copies worldwide. Shania Twain has been nominated for eighteen Grammy Awards and has won five. She was the best-paid country music star in 2016 according to Forbes, with earnings of $27.5 million. Twain has been credited with breaking international boundaries for country music, as well as inspiring many country artists to incorporate different genres into their music in order to attract a wider audience. She is also credited with changing the way in which many female country performers market themselves: unlike many before her, she used fashion and her sex appeal to shed the stereotypical 'honky-tonk' image and distinguish herself from other female country artists of the time. Line dancing revival In the early to mid-1990s, country western music was influenced by the popularity of line dancing. This influence was so great that Chet Atkins was quoted as saying, "The music has gotten pretty bad, I think. It's all that damn line dancing." By the end of the decade, however, at least one line dance choreographer complained that good country line dance music was no longer being released. In contrast, artists such as Don Williams and George Jones, who had enjoyed more or less consistent chart success through the 1970s and 1980s, suddenly saw their fortunes fall rapidly around 1991 when the new chart rules took effect. With the emergence in the late 2010s of the fusion genre "country trap", which sets country/western themes to a hip hop beat but usually with fully sung lyrics, line dancing country had a minor revival. Examples of the phenomenon include "Old Town Road" by Lil Nas X and "The Git Up" by Blanco Brown, both of which topped the Billboard country charts despite scant radio airplay. Alternative country Country influences combined with punk rock and alternative rock to forge the "cowpunk" scene in Southern California during the 1980s, which included bands such as the Long Ryders, Lone Justice and the Beat Farmers, as well as the established punk group X, whose music had begun to include country and rockabilly influences. 
Simultaneously, a generation of diverse country artists outside of California emerged that rejected the perceived cultural and musical conservatism associated with Nashville's mainstream country musicians in favor of more countercultural outlaw country and the folk singer-songwriter traditions of artists such as Woody Guthrie, Gram Parsons and Bob Dylan. Artists from outside California who were associated with early alternative country included singer-songwriters such as Lucinda Williams, Lyle Lovett and Steve Earle, the Nashville country rock band Jason and the Scorchers, the Providence "cowboy pop" band Rubber Rodeo, and the British post-punk band the Mekons. Earle, in particular, was noted for his popularity with both country and college rock audiences: He promoted his 1986 debut album Guitar Town with a tour that saw him open for both country singer Dwight Yoakam and alternative rock band the Replacements. Yoakam also cultivated a fanbase spanning multiple genres through his stripped-down honky-tonk influenced sound, association with the cowpunk scene, and performances at Los Angeles punk rock clubs. These early styles had coalesced into a genre by the time the Illinois group Uncle Tupelo released their influential debut album No Depression in 1990. The album is widely credited as being the first "alternative country" album, and inspired the name of No Depression magazine, which exclusively covered the new genre. Following Uncle Tupelo's disbanding in 1994, its members formed two significant bands in genre: Wilco and Son Volt. Although Wilco's sound had moved away from country and towards indie rock by the time they released their critically acclaimed album Yankee Hotel Foxtrot in 2002, they have continued to be an influence on later alt-country artists. Other acts who became prominent in the alt-country genre during the 1990s and 2000s included the Bottle Rockets, the Handsome Family, Blue Mountain, Robbie Fulks, Blood Oranges, Bright Eyes, Drive-By Truckers, Old 97's, Old Crow Medicine Show, Nickel Creek, Neko Case, and Whiskeytown, whose lead singer Ryan Adams later had a successful solo-career. Alt-country, in various iterations overlapped with other genres, including Red Dirt country music (Cross Canadian Ragweed), jam bands (My Morning Jacket and the String Cheese Incident), and indie folk (the Avett Brothers). Despite the genre's growing popularity in the 1980s, '90s and 2000s, alternative country and neo-traditionalist artists saw minimal support from country radio in those decades, despite strong sales and critical acclaim for albums such as the soundtrack to the 2000 film O Brother, Where Art Thou?. In 1987, the Beat Farmers gained airplay on country music stations with their song "Make It Last", but the single was pulled from the format when station programmers decreed the band's music was too rock-oriented for their audience. However, some alt-country songs have been crossover hits to mainstream country radio in cover versions by established artists on the format; Lucinda Williams' "Passionate Kisses" was a hit for Mary Chapin Carpenter in 1993, Ryan Adams's "When the Stars Go Blue" was a hit for Tim McGraw in 2007, and Old Crow Medicine Show's "Wagon Wheel" was a hit for Darius Rucker in 2013. In the 2010s, the alt-country genre saw an increase in its critical and commercial popularity, owing to the success of artists such as the Civil Wars, Chris Stapleton, Sturgill Simpson, Jason Isbell, Lydia Loveless and Margo Price. 
In 2019, Kacey Musgraves – a country artist who had gained a following with indie rock fans and music critics despite minimal airplay on country radio – won the Grammy Award for Album of the Year for her album Golden Hour. Sixth generation (2000s–present) The sixth generation of country music continued to be influenced by other genres such as pop, rock, and R&B. Richard Marx crossed over with his Days in Avalon album, which features five country songs and several singers and musicians. Alison Krauss sang background vocals to Marx's single "Straight from My Heart." Also, Bon Jovi had a hit single, "Who Says You Can't Go Home", with Jennifer Nettles of Sugarland. Kid Rock's collaboration with Sheryl Crow, "Picture," was a major crossover hit in 2001 and began Kid Rock's transition from hard rock to a country-rock hybrid that would later produce another major crossover hit, 2008's "All Summer Long." (Crow, whose music had often incorporated country elements, would also officially cross over into country with her hit "Easy" from her debut country album Feels like Home). Darius Rucker, frontman for the 1990s pop-rock band Hootie & the Blowfish, began a country solo career in the late 2000s, one that to date has produced five albums and several hits on both the country charts and the Billboard Hot 100. Singer-songwriter Unknown Hinson became famous for his appearance in the Charlotte television show Wild, Wild, South, after which Hinson started his own band and toured in southern states. Other rock stars who featured a country song on their albums were Don Henley (who released Cass County in 2015, an album which featured collaborations with numerous country artists) and Poison. The back half of the 2010-2020 decade saw an increasing number of mainstream country acts collaborate with pop and R&B acts; many of these songs achieved commercial success by appealing to fans across multiple genres; examples include collaborations between Kane Brown and Marshmello and Maren Morris and Zedd. There has also been interest from pop singers in country music, including Beyoncé, Lady Gaga, Alicia Keys, Gwen Stefani, Justin Timberlake, Justin Bieber and Pink. Supporting this movement is the new generation of contemporary pop-country, including Taylor Swift, Miranda Lambert, Carrie Underwood, Kacey Musgraves, Miley Cyrus, Billy Ray Cyrus, Sam Hunt, Chris Young, who introduced new themes in their works, touching on fundamental rights, feminism, and controversies about racism and religion of the older generations. Popular culture In 2005, country singer Carrie Underwood rose to fame as the winner of the fourth season of American Idol and has since become one of the most prominent recording artists in the genre, with worldwide sales of more than 65 million records and seven Grammy Awards. With her first single, "Inside Your Heaven", Underwood became the only solo country artist to have a number 1 hit on the Billboard Hot 100 chart in the 2000–2009 decade and also broke Billboard chart history as the first country music artist ever to debut at No. 1 on the Hot 100. Underwood's debut album, Some Hearts, became the best-selling solo female debut album in country music history, the fastest-selling debut country album in the history of the SoundScan era and the best-selling country album of the last 10 years, being ranked by Billboard as the number 1 Country Album of the 2000–2009 decade. 
She has also become the female country artist with the most number one hits on the Billboard Hot Country Songs chart in the Nielsen SoundScan era (1991–present), having 14 No. 1s and breaking her own Guinness Book record of ten. In 2007, Underwood won the Grammy Award for Best New Artist, becoming only the second Country artist in history (and the first in a decade) to win it. She also made history by becoming the seventh woman to win Entertainer of the Year at the Academy of Country Music Awards, and the first woman in history to win the award twice, as well as twice consecutively. Time has listed Underwood as one of the 100 most influential people in the world. In 2016, Underwood topped the Country Airplay chart for the 15th time, becoming the female artist with most number ones on that chart. Carrie Underwood was only one of several country stars produced by a television series in the 2000s. In addition to Underwood, American Idol launched the careers of Kellie Pickler, Josh Gracin, Bucky Covington, Kristy Lee Cook, Danny Gokey, Lauren Alaina and Scotty McCreery (as well as that of occasional country singer Kelly Clarkson) in the decade, and would continue to launch country careers in the 2010s. The series Nashville Star, while not nearly as successful as Idol, did manage to bring Miranda Lambert, Kacey Musgraves and Chris Young to mainstream success, also launching the careers of lower-profile musicians such as Buddy Jewell, Sean Patrick McGraw, and Canadian musician George Canyon. Can You Duet? produced the duos Steel Magnolia and Joey + Rory. Teen sitcoms also have influenced modern country music; in 2008, actress Jennette McCurdy (best known as the sidekick Sam on the teen sitcom iCarly) released her first single, "So Close", following that with the single "Generation Love" in 2011. Another teen sitcom star, Miley Cyrus (of Hannah Montana), also had a crossover hit in the late 2000s with "The Climb" and another with a duet with her father, Billy Ray Cyrus, with "Ready, Set, Don't Go." Jana Kramer, an actress in the teen drama One Tree Hill, released a country album in 2012 that has produced two hit singles as of 2013. Actresses Hayden Panettiere and Connie Britton began recording country songs as part of their roles in the TV series Nashville and Pretty Little Liars star Lucy Hale released her debut album Road Between in 2014. In 2010, the group Lady Antebellum won five Grammys, including the coveted Song of the Year and Record of the Year for "Need You Now". A large number of duos and vocal groups emerged on the charts in the 2010s, many of which feature close harmony in the lead vocals. In addition to Lady A, groups such as Herrick, the Quebe Sisters Band, Little Big Town, the Band Perry, Gloriana, Thompson Square, Eli Young Band, Zac Brown Band and British duo the Shires have emerged to occupy a large share of mainstream success alongside solo singers such as Kacey Musgraves and Miranda Lambert. One of the most commercially successful country artists of the late 2000s and early 2010s has been singer-songwriter Taylor Swift. Swift first became widely known in 2006 when her debut single, "Tim McGraw," was released when Swift was only 16. In 2006, Swift released her self-titled debut studio album, which spent 275 weeks on Billboard 200, one of the longest runs of any album on that chart. 
In 2008, Taylor Swift released her second studio album, Fearless, which became the second-longest-charting number-one album on the Billboard 200 and the second best-selling album of the following five years (just behind Adele's 21). At the 2010 Grammys, a 20-year-old Swift won Album of the Year for Fearless, making her the youngest artist to win the award at that time. Swift has received ten Grammys to date. Buoyed by her teen idol status among girls and a change in the methodology of compiling the Billboard charts to favor pop-crossover songs, Swift's 2012 single "We Are Never Ever Getting Back Together" spent the most weeks at the top of Billboard's Hot Country Songs chart of any song in nearly five decades. The song's long run at the top of the chart was somewhat controversial, as it is largely a pop song without much country influence and its chart success was driven by a change in the chart's criteria to include airplay on non-country radio stations, prompting disputes over what constitutes a country song; many of Swift's later releases, such as 1989 (2014), Reputation (2017), and Lover (2019), were aimed solely at pop audiences. Swift returned to country music in her recent folk-inspired releases, Folklore (2020) and Evermore (2020), with songs like "Betty" and "No Body, No Crime". National patriotism The roots of conservative, patriotic country music lie in Lee Greenwood's "God Bless the USA". The September 11 attacks of 2001 and the economic recession helped move country music back into the spotlight. Many country artists, such as Alan Jackson with his ballad about the attacks, "Where Were You (When the World Stopped Turning)", wrote songs that celebrated the military, highlighted the gospel, and emphasized home and family values over wealth. Alt-country singer Ryan Adams's song "New York, New York" pays tribute to New York City, and its popular music video (which was shot four days before the attacks) shows Adams playing in front of the Manhattan skyline, along with several shots of the city. In contrast, more rock-oriented country singers took more direct aim at the attacks' perpetrators; Toby Keith's "Courtesy of the Red, White and Blue (The Angry American)" threatened to put "a boot in" the posterior of the enemy, while Charlie Daniels's "This Ain't No Rag, It's a Flag" promised to "hunt" the perpetrators "down like a mad dog hound." These songs gained such recognition that they helped put country music back into popular culture. Darryl Worley also recorded "Have You Forgotten?" in the same vein. Influence of rock music The influence of rock music in country became more overt during the late 2000s and early 2010s as artists like Eric Church, Jason Aldean, and Brantley Gilbert had success; Aaron Lewis, former frontman for the rock group Staind, had a moderately successful entry into country music in 2011 and 2012, as did Dallas Smith, former frontman of the band Default. Bro country In the early 2010s, "bro-country", a genre noted primarily for its themes of drinking, partying, girls, and pickup trucks, became particularly popular. Notable artists associated with this genre are Luke Bryan, Jason Aldean, Blake Shelton, Jake Owen and Florida Georgia Line, whose song "Cruise" became the best-selling country song of all time. Research in the mid-2010s suggested that about 45 percent of country's best-selling songs could be considered bro-country, with the top two artists being Luke Bryan and Florida Georgia Line. 
Albums by bro-country singers also sold very well: in 2013, Luke Bryan's Crash My Party was the third best-selling album of any genre in the US, with Florida Georgia Line's Here's to the Good Times at sixth and Blake Shelton's Based on a True Story at ninth. It is also thought that the popularity of bro-country helped country music surpass classic rock as the most popular genre in America in 2012. The genre, however, is controversial: it has been criticized by other country musicians and commentators over its themes and its depiction of women, opening up a divide between the older generation of country singers and the younger bro-country singers that was described as a "civil war" by musicians, critics, and journalists. In 2014, Maddie & Tae's "Girl in a Country Song", addressing many of the controversial bro-country themes, peaked at number one on the Billboard Country Airplay chart. Texas Country The Lone Star State can claim some of the most talented musicians in country music. These artists have created large Texas-based fan communities that regularly attend live shows throughout the state and tune in to their favorite songs on radio stations in Texas and beyond. Texas country music has developed a secondary music chart to the Nashville-based country music chart. The Texas Country Music Chart is composed of artists who were born in, reside in, or have connections to Texas. Artists on this chart are major stars within Texas and within the reach of Texas country radio airplay. The work these artists have made is important not only for Texas music but for country music in general. Artists currently paving the way for the subgenre include Cody Johnson, Aaron Watson and many others who receive little recognition from the country music community in Nashville. Traditional artists within Texas country include Bruce Robison, the Randy Rogers Band, Roger Creager, Pat Green and numerous other influential artists. Texas country remains a largely untapped force in the music industry, and with growing interest, talent and radio airplay from the region, the country music scene is expecting change via Texas-based artists. Bluegrass country Bluegrass country is a genre that contains songs about hard times, country love, and storytelling. Newer artists like Billy Strings, the Grascals, Molly Tuttle, Tyler Childers and the Infamous Stringdusters have been increasing the popularity of this genre, alongside some of the genre's more established stars who remain popular, including Rhonda Vincent, Alison Krauss and Union Station, Ricky Skaggs and Del McCoury. The genre has developed in the Northern Kentucky and Cincinnati area. Other artists include the Nitty Gritty Dirt Band, Johnny Cash, the Osborne Brothers, and many others. Americana In an effort to counter mainstream country music's over-reliance on pop-infused artists, the sister genre of Americana began to gain popularity and increase in prominence, receiving eight Grammy categories of its own in 2009. Americana music incorporates elements of country music, bluegrass, folk, blues, gospel, rhythm and blues, roots rock and southern soul, and is overseen by the Americana Music Association and the Americana Music Honors & Awards. As a result of an increasingly pop-leaning mainstream, many more traditional-sounding artists such as Tyler Childers, Zach Bryan and Old Crow Medicine Show began to associate themselves more with Americana and the alternative country scene, where their sound was more celebrated. 
Similarly, many established country acts who no longer received commercial airplay, including Emmylou Harris and Lyle Lovett, began to flourish again. Deep country During the 2000s, Brad Paisley and Alison Krauss recorded the deep country song "Whiskey Lullaby", and Gretchen Wilson released "Redneck Woman". Chris Young's second album, The Man I Want to Be, was released in September 2009. It was produced by James Stroud and includes cover versions of Waylon Jennings' "Rose in Paradise" (as a duet with Willie Nelson) and Tony Joe White's "Rainy Night in Georgia". Contemporary country In the mid to late 2010s, country music began to sound increasingly like modern-day pop music, with simpler and more repetitive lyrics, more electronic-based instrumentation, and experimentation with "talk-singing" and rap. Pop-country pulled farther away from the traditional sounds of country music and drew criticism from country music purists while gaining popularity with mainstream audiences. The topics addressed have also changed, turning to controversial subjects such as acceptance of the LGBT community, safe sex, recreational marijuana use, and the questioning of religious sentiment. Influences also come from some pop artists' interest in the country genre, including Justin Timberlake with the album Man of the Woods, Beyoncé's single "Daddy Lessons" from Lemonade, Gwen Stefani with "Nobody but You", Bruno Mars, Lady Gaga, Alicia Keys, Kelly Clarkson, and Pink. Some modern artists who primarily or entirely produce country pop music include Kacey Musgraves, Maren Morris, Kelsea Ballerini, Sam Hunt, Kane Brown, Chris Lane, Lil Nas X, The Highwomen and Dan + Shay. The singers who are part of this movement are also described as "Nashville's new generation of country". Despite the changes made by the new generation, it has been recognized by the major music awards associations and has achieved success on Billboard and international charts. Golden Hour by Kacey Musgraves won Album of the Year at the 61st Annual Grammy Awards, the Academy of Country Music Awards, and the Country Music Association Awards, although it received widespread criticism from the more traditionalist public. Lil Nas X's song "Old Town Road" spent 19 weeks atop the US Billboard Hot 100 chart, becoming the longest-running number-one song since the chart debuted in 1958 and winning Billboard Music Awards, MTV Video Music Awards and Grammy Awards. Sam Hunt's "Leave the Night On" peaked concurrently on the Hot Country Songs and Country Airplay charts, making Hunt the first country artist in 22 years, since Billy Ray Cyrus, to reach the top of three country charts simultaneously in the Nielsen SoundScan era. Maren Morris's successful collaboration with EDM producer Zedd, "The Middle", is considered a representative example of the fusion of electro-pop with country music. International Australia Australian country music has a long tradition. Influenced by American country music, it has developed a distinct style, shaped by British and Irish folk ballads and Australian bush balladeers like Henry Lawson and Banjo Paterson. Country instruments, including the guitar, banjo, fiddle and harmonica, create the distinctive sound of country music in Australia and accompany songs with strong storylines and memorable choruses. Folk songs sung in Australia between the 1780s and 1920s, based around such themes as the struggle against government tyranny, or the lives of bushrangers, swagmen, drovers, stockmen and shearers, continue to influence the genre. 
This strain of Australian country, with lyrics focusing on Australian subjects, is generally known as "bush music" or "bush band music". "Waltzing Matilda", often regarded as Australia's unofficial national anthem, is a quintessential Australian country song, influenced more by British and Irish folk ballads than by American country and western music. The lyrics were composed by the poet Banjo Paterson in 1895. Other popular songs from this tradition include "The Wild Colonial Boy", "Click Go the Shears", "The Queensland Drover" and "The Dying Stockman". Later themes which endure to the present include the experiences of war, of droughts and flooding rains, of Aboriginality and of the railways and trucking routes which link Australia's vast distances. Pioneers of a more Americanised popular country music in Australia included Tex Morton (known as "The Father of Australian Country Music") in the 1930s. Author Andrew Smith delivers a through research and engaged view of Tex Morton's life and his impact on the country music scene in Australia in the 1930s and 1940s. Other early stars included Buddy Williams, Shirley Thoms and Smoky Dawson. Buddy Williams (1918–1986) was the first Australian-born to record country music in Australia in the late 1930s and was the pioneer of a distinctly Australian style of country music called the bush ballad that others such as Slim Dusty would make popular in later years. During the Second World War, many of Buddy Williams recording sessions were done whilst on leave from the Army. At the end of the war, Williams would go on to operate some of the largest travelling tent rodeo shows Australia has ever seen. In 1952, Dawson began a radio show and went on to national stardom as a singing cowboy of radio, TV and film. Slim Dusty (1927–2003) was known as the "King of Australian Country Music" and helped to popularise the Australian bush ballad. His successful career spanned almost six decades, and his 1957 hit "A Pub with No Beer" was the biggest-selling record by an Australian to that time, and with over seven million record sales in Australia he is the most successful artist in Australian musical history. Dusty recorded and released his one-hundredth album in the year 2000 and was given the honour of singing "Waltzing Matilda" in the closing ceremony of the Sydney 2000 Olympic Games. Dusty's wife Joy McKean penned several of his most popular songs. Chad Morgan, who began recording in the 1950s, has represented a vaudeville style of comic Australian country; Frank Ifield achieved considerable success in the early 1960s, especially in the UK Singles Charts and Reg Lindsay was one of the first Australians to perform at Nashville's Grand Ole Opry in 1974. Eric Bogle's 1972 folk lament to the Gallipoli Campaign "And the Band Played Waltzing Matilda" recalled the British and Irish origins of Australian folk-country. Singer-songwriter Paul Kelly, whose music style straddles folk, rock and country, is often described as the poet laureate of Australian music. By the 1990s, country music had attained crossover success in the pop charts, with artists like James Blundell and James Reyne singing "Way Out West", and country star Kasey Chambers winning the ARIA Award for Best Female Artist in 2000, 2002 and 2004, tying with pop stars Wendy Matthews and Sia for the most wins in that category. Furthermore, Chambers has gone on to win nine ARIA Awards for Best Country Album and, in 2018, became the youngest artist to ever be inducted into the ARIA Hall of Fame. 
The crossover influence of Australian country is also evident in the music of successful contemporary bands the Waifs and the John Butler Trio. Nick Cave has been heavily influenced by the country artist Johnny Cash. In 2000, Cash, covered Cave's "The Mercy Seat" on the album American III: Solitary Man, seemingly repaying Cave for the compliment he paid by covering Cash's "The Singer" (originally "The Folk Singer") on his Kicking Against the Pricks album. Subsequently, Cave cut a duet with Cash on a version of Hank Williams' "I'm So Lonesome I Could Cry" for Cash's American IV: The Man Comes Around album (2002). Popular contemporary performers of Australian country music include John Williamson (who wrote the iconic "True Blue"), Lee Kernaghan (whose hits include "Boys from the Bush" and "The Outback Club"), Gina Jeffreys, Forever Road and Sara Storer. In the United States, Olivia Newton-John, Sherrié Austin and Keith Urban have attained great success. During her time as a country singer in the 1970s, Newton-John became the first (and to date only) non-American winner of the Country Music Association Award for Female Vocalist of the Year which many considered a controversial decision by the CMA; after starring in the rock-and-roll musical film Grease in 1978, Newton-John (mirroring the character she played in the film) shifted to pop music in the 1980s. Urban is arguably considered the most successful international Australian country star, winning nine CMA Awards, including three Male Vocalist of the Year wins and two wins of the CMA's top honour Entertainer of the Year. Pop star Kylie Minogue found success with her 2018 country pop album Golden which she recorded in Nashville reaching number one in Scotland, the UK and her native Australia. Country music has been a particularly popular form of musical expression among Indigenous Australians. Troy Cassar-Daley is among Australia's successful contemporary indigenous performers, and Kev Carmody and Archie Roach employ a combination of folk-rock and country music to sing about Aboriginal rights issues. The Tamworth Country Music Festival began in 1973 and now attracts up to 100,000 visitors annually. Held in Tamworth, New South Wales (country music capital of Australia), it celebrates the culture and heritage of Australian country music. During the festival the CMAA holds the Country Music Awards of Australia ceremony awarding the Golden Guitar trophies. Other significant country music festivals include the Whittlesea Country Music Festival (near Melbourne) and the Mildura Country Music Festival for "independent" performers during October, and the Canberra Country Music Festival held in the national capital during November. Country HQ showcases new talent on the rise in the country music scene down under. CMC (the Country Music Channel), a 24‑hour music channel dedicated to non-stop country music, can be viewed on pay TV and features once a year the Golden Guitar Awards, CMAs and CCMAs alongside international shows such as The Wilkinsons, The Road Hammers, and Country Music Across America. Canada Outside of the United States, Canada has the largest country music fan and artist base, something that is to be expected given the two countries' proximity and cultural parallels. Mainstream country music is | numerous references to the trucker culture of the time like "ICC" for Interstate Commerce Commission and "little white pills" as a reference to amphetamines. 
Starday Records in Nashville followed up on Dudley's initial success with the release of "Give Me 40 Acres" by the Willis Brothers. Rockabilly Rockabilly was most popular with country fans in the 1950s; one of the first rock and roll superstars was former Western yodeler Bill Haley, who repurposed his Four Aces of Western Swing into a rockabilly band in the early 1950s and renamed it the Comets. Bill Haley & His Comets are credited with two of the first successful rock and roll records, "Crazy Man, Crazy" in 1953 and "Rock Around the Clock" in 1954. 1956 could be called the year of rockabilly in country music. Rockabilly was an early form of rock and roll, an upbeat combination of blues and country music. The number two, three and four songs on Billboard's charts for that year were Elvis Presley's "Heartbreak Hotel", Johnny Cash's "I Walk the Line" and Carl Perkins's "Blue Suede Shoes"; George Jones also recorded rockabilly that year under the pseudonym Thumper Jones. Cash and Presley placed songs in the top 5 in 1958, with the No. 3 "Guess Things Happen That Way"/"Come In, Stranger" by Cash and Presley's No. 5 "Don't"/"I Beg of You." Presley acknowledged the influence of rhythm and blues artists on his style, saying "The colored folk been singin' and playin' it just the way I'm doin' it now, man, for more years than I know." Within a few years, many rockabilly musicians returned to a more mainstream style or had defined their own unique style. Country music gained national television exposure through Ozark Jubilee on ABC-TV and radio from 1955 to 1960, broadcast from Springfield, Missouri. The program showcased top stars including several rockabilly artists, some from the Ozarks. As Webb Pierce put it in 1956, "Once upon a time, it was almost impossible to sell country music in a place like New York City. Nowadays, television takes us everywhere, and country music records and sheet music sell as well in large cities as anywhere else." The late 1950s saw the emergence of Buddy Holly, but by the end of the decade a backlash, as well as traditional artists such as Ray Price, Marty Robbins, and Johnny Horton, began to shift the industry away from the rock n' roll influences of the mid-1950s. The Country Music Association was founded in 1958, in part because numerous country musicians were appalled by the increased influence of rock and roll on country music. The Nashville and countrypolitan sounds Beginning in the mid-1950s, and reaching its peak during the early 1960s, the Nashville sound turned country music into a multimillion-dollar industry centered in Nashville, Tennessee. Under the direction of producers such as Chet Atkins, Bill Porter, Paul Cohen, Owen Bradley, Bob Ferguson, and later Billy Sherrill, the sound brought country music to a diverse audience and helped revive country as it emerged from a commercially fallow period. This subgenre was notable for borrowing from 1950s pop stylings: a prominent and smooth vocal, backed by a string section (violins and other orchestral strings) and vocal chorus. Instrumental soloing was de-emphasized in favor of trademark "licks". Leading artists in this genre included Jim Reeves, Skeeter Davis, Connie Smith, the Browns, Patsy Cline, and Eddy Arnold. The "slip note" piano style of session musician Floyd Cramer was an important component of this style. The Nashville sound collapsed in mainstream popularity in 1964, a victim of both the British Invasion and the deaths of Reeves and Cline in separate airplane crashes. By the mid-1960s, the genre had developed into countrypolitan.
Countrypolitan was aimed straight at mainstream markets, and it sold well throughout the later 1960s into the early 1970s. Top artists included Tammy Wynette, Lynn Anderson and Charlie Rich, as well as such former "hard country" artists as Ray Price and Marty Robbins. Despite the appeal of the Nashville sound, many traditional country artists emerged during this period and dominated the genre: Loretta Lynn, Merle Haggard, Buck Owens, Porter Wagoner, George Jones, and Sonny James among them. Country-soul crossover In 1962, Ray Charles surprised the pop world by turning his attention to country and western music, topping the charts and rating number three for the year on Billboard's pop chart with the "I Can't Stop Loving You" single, and recording the landmark album Modern Sounds in Country and Western Music. Bakersfield sound Another subgenre of country music grew out of hardcore honky tonk with elements of Western swing and originated north-northwest of Los Angeles in Bakersfield, California, where many "Okies" and other Dust Bowl migrants had settled. Influenced by one-time West Coast residents Bob Wills and Lefty Frizzell, by 1966 it was known as the Bakersfield sound. It relied on electric instruments and amplification, in particular the Telecaster electric guitar, more than other subgenres of the country music of the era, and it can be described as having a sharp, hard, driving, no-frills, edgy flavor—hard guitars and honky-tonk harmonies. Leading practitioners of this style were Buck Owens, Merle Haggard, Tommy Collins, Gary Allan, and Wynn Stewart, each of whom had his own style. Ken Nelson, who had produced Owens, Haggard and Rose Maddox, became interested in the trucking song subgenre following the success of "Six Days on the Road" and asked Red Simpson to record an album of trucking songs. Haggard's "White Line Fever" was also part of the trucking subgenre. Western music merges with country The country music scene from the 1940s until the 1970s was largely dominated by Western music influences, so much so that the genre began to be called "Country and Western". Even today, cowboy and frontier values continue to play a role in country music at large, with Western wear, cowboy boots, and cowboy hats continuing to be in fashion for country artists. West of the Mississippi River, many of these Western genres continue to flourish, including the Red Dirt of Oklahoma, New Mexico music of New Mexico, and both Texas country music and Tejano music of Texas. From the 1950s until the early 1970s, the latter part of the Western heyday in country music, many of these genres featured popular artists who continue to influence both their distinctive genres and country music at large. Red Dirt featured Bob Childers and Steve Ripley; New Mexico music, Al Hurricane, Al Hurricane Jr., and Antonia Apodaca; and the Texas scenes, Willie Nelson, Freddie Fender, Johnny Rodriguez, and Little Joe. As Outlaw country music emerged as a subgenre in its own right, Red Dirt, New Mexico, Texas country, and Tejano grew in popularity as part of the Outlaw country movement. Originating in the bars, fiestas, and honky-tonks of Oklahoma, New Mexico, and Texas, their music supplemented outlaw country's singer-songwriter tradition as well as 21st-century rock-inspired alternative country and hip hop-inspired country rap artists.
Fourth generation (1970s–1980s) Outlaw movement Outlaw country was derived from traditional Western styles, including Red Dirt, New Mexico, Texas country and Tejano, and from the honky-tonk music of the late 1950s and 1960s. Songs such as "Ring of Fire", popularized by Johnny Cash in 1963, show clear influences from the likes of Al Hurricane and Little Joe. This influence culminated with artists such as Ray Price (whose band, the "Cherokee Cowboys", included Willie Nelson and Roger Miller) and, mixed with the anger of an alienated subculture of the nation during the period, produced a collection of musicians that came to be known as the outlaw movement, which revolutionized the genre of country music in the early 1970s. As Willie Nelson put it: "After I left Nashville (the early 70s), I wanted to relax and play the music that I wanted to play, and just stay around Texas, maybe Oklahoma. Waylon and I had that outlaw image going, and when it caught on at colleges and we started selling records, we were O.K. The whole outlaw thing, it had nothing to do with the music, it was something that got written in an article, and the young people said, 'Well, that's pretty cool.' And started listening." The term outlaw country is traditionally associated with Willie Nelson, Jerry Jeff Walker, Hank Williams, Jr., Merle Haggard, Waylon Jennings and Joe Ely. It was encapsulated in the 1976 album Wanted! The Outlaws. Though the outlaw movement as a cultural fad had died down after the late 1970s (with Jennings noting in 1978 that it had gotten out of hand and led to real-life legal scrutiny), many Western and outlaw country music artists maintained their popularity during the 1980s by forming supergroups, such as The Highwaymen, Texas Tornados, and Bandido. Country pop Country pop or soft pop, with roots in the countrypolitan sound, folk music, and soft rock, is a subgenre that first emerged in the 1970s. Although the term first referred to country music songs and artists that crossed over to top 40 radio, country pop acts are now more likely to cross over to adult contemporary music. It started with pop music singers like Glen Campbell, Bobbie Gentry, John Denver, Olivia Newton-John, Anne Murray, B. J. Thomas, the Bellamy Brothers, and Linda Ronstadt having hits on the country charts. Between 1972 and 1975, singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles ("Rocky Mountain High", "Sunshine on My Shoulders", "Annie's Song", "Thank God I'm a Country Boy", and "I'm Sorry"), and was named Country Music Entertainer of the Year in 1975. The year before, Olivia Newton-John, an Australian pop singer, won the Grammy for "Best Female Country Vocal Performance" as well as the Country Music Association's most coveted award for females, "Female Vocalist of the Year". In response, George Jones, Tammy Wynette, Jean Shepard and other traditional Nashville country artists who were dissatisfied with the new trend formed the short-lived "Association of Country Entertainers" in 1974; the ACE soon unraveled in the wake of Jones and Wynette's bitter divorce and Shepard's realization that most others in the industry lacked her passion for the movement. During the mid-1970s, Dolly Parton, a successful mainstream country artist since the late 1960s, mounted a high-profile campaign to cross over to pop music, culminating in her 1977 hit "Here You Come Again", which topped the U.S. country singles chart and also reached No. 3 on the pop singles chart.
Parton's male counterpart, Kenny Rogers, came from the opposite direction, aiming his music at the country charts after a successful career in pop, rock and folk music with the First Edition, and achieving success the same year with "Lucille", which topped the country charts and reached No. 5 on the U.S. pop singles chart, as well as No. 1 on the British all-genre chart. Parton and Rogers would both continue to have success on both country and pop charts simultaneously, well into the 1980s. Country music propelled Kenny Rogers' career, making him a three-time Grammy Award winner and six-time Country Music Association Awards winner. He sold more than 50 million albums in the US, and one of his songs, "The Gambler," inspired multiple TV movies, with Rogers as the main character. Artists like Crystal Gayle, Ronnie Milsap and Barbara Mandrell would also find success on the pop charts with their records. In 1975, author Paul Hemphill stated in the Saturday Evening Post, "Country music isn't really country anymore; it is a hybrid of nearly every form of popular music in America." During the early 1980s, country artists continued to see their records perform well on the pop charts. Willie Nelson and Juice Newton each had two songs in the top 5 of the Billboard Hot 100 in the early eighties: Nelson charted "Always on My Mind" (No. 5, 1982) and "To All the Girls I've Loved Before" (No. 5, 1984, a duet with Julio Iglesias), and Newton achieved success with "Queen of Hearts" (No. 2, 1981) and "Angel of the Morning" (No. 4, 1981). Four country songs topped the Billboard Hot 100 in the 1980s: "Lady" by Kenny Rogers, from the late fall of 1980; "9 to 5" by Dolly Parton and "I Love a Rainy Night" by Eddie Rabbitt (these two back-to-back at the top in early 1981); and "Islands in the Stream", a 1983 duet by Dolly Parton and Kenny Rogers, a pop-country crossover hit written by Barry, Robin, and Maurice Gibb of the Bee Gees. Newton's "Queen of Hearts" almost reached No. 1, but was kept out of the spot by the pop ballad juggernaut "Endless Love" by Diana Ross and Lionel Richie. The move of country music toward neotraditional styles led to a marked decline in country/pop crossovers in the late 1980s, and only one song in that period—Roy Orbison's "You Got It", from 1989—made the top 10 of both the Billboard Hot Country Singles and Hot 100 charts, due largely to a revival of interest in Orbison after his sudden death. The only song with substantial country airplay to reach number one on the pop charts in the late 1980s was "At This Moment" by Billy Vera and the Beaters, an R&B song with slide guitar embellishment that appeared at number 42 on the country charts from minor crossover airplay. The record-setting, multi-platinum group Alabama was named Artist of the Decade for the 1980s by the Academy of Country Music. Country rock Country rock is a genre that started in the 1960s but became prominent in the 1970s. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the "old values" of rock n' roll. At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock.
Early innovators in this new style of music in the 1960s and 1970s included Bob Dylan, who was the first to revert to country music with his 1967 album John Wesley Harding (and even more so with that album's follow-up, Nashville Skyline), followed by Gene Clark, Clark's former band the Byrds (with Gram Parsons on Sweetheart of the Rodeo) and its spin-off the Flying Burrito Brothers (also featuring Gram Parsons), guitarist Clarence White, Michael Nesmith (the Monkees and the First National Band), the Grateful Dead, Neil Young, Commander Cody, the Allman Brothers, the Marshall Tucker Band, Poco, Buffalo Springfield, Stephen Stills' band Manassas and Eagles, among many, even the former folk music duo Ian & Sylvia, who formed Great Speckled Bird in 1969. The Eagles would become the most successful of these country rock acts, and their compilation album Their Greatest Hits (1971–1975) remains the second-best-selling album in the US with 29 million copies sold. The Rolling Stones also got into the act with songs like "Dead Flowers" and a country version of "Honky Tonk Women". Described by AllMusic as the "father of country-rock", Gram Parsons' work in the early 1970s was acclaimed for its purity and for his appreciation for aspects of traditional country music. Though his career was cut tragically short by his 1973 death, his legacy was carried on by his protégé and duet partner Emmylou Harris; Harris would release her debut solo in 1975, an amalgamation of country, rock and roll, folk, blues and pop. Subsequent to the initial blending of the two polar opposite genres, other offspring soon resulted, including Southern rock, heartland rock and in more recent years, alternative country. In the decades that followed, artists such as Juice Newton, Alabama, Hank Williams, Jr. (and, to an even greater extent, Hank Williams III), Gary Allan, Shania Twain, Brooks & Dunn, Faith Hill, Garth Brooks, Dwight Yoakam, Steve Earle, Dolly Parton, Rosanne Cash and Linda Ronstadt moved country further towards rock influence. Neocountry In 1980, a style of "neocountry disco music" was popularized by the film Urban Cowboy, which also included more traditional songs such as "The Devil Went Down to Georgia" by the Charlie Daniels Band. It was during this time that a glut of pop-country crossover artists began appearing on the country charts: former pop stars Bill Medley (of the Righteous Brothers), "England Dan" Seals (of England Dan and John Ford Coley), Tom Jones, and Merrill Osmond (both alone and with some of his brothers; his younger sister Marie Osmond was already an established country star) all recorded significant country hits in the early 1980s. Sales in record stores rocketed to $250 million in 1981; by 1984, 900 radio stations began programming country or neocountry pop full-time. As with most sudden trends, however, by 1984 sales had dropped below 1979 figures. Truck driving country Truck driving country music is a genre of country music and is a fusion of honky-tonk, country rock and the Bakersfield sound. It has the tempo of country rock and the emotion of honky-tonk, and its lyrics focus on a truck driver's lifestyle. Truck driving country songs often deal with the profession of trucking and love. Well-known artists who sing truck driving country include Dave Dudley, Red Sovine, Dick Curless, Red Simpson, Del Reeves, the Willis Brothers and Jerry Reed, with C. W. McCall and Cledus Maggard (pseudonyms of Bill Fries and Jay Huguely, respectively) being more humorous entries in the subgenre. 
Dudley is known as the father of truck driving country. Neotraditionalist movement During the mid-1980s, a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts, in favor of more, traditional, "back-to-basics" production. Many of the artists during the latter half of the 1980s drew on traditional honky-tonk, bluegrass, folk and western swing. Artists who typified this sound included Travis Tritt, Reba McEntire, George Strait, Keith Whitley, Alan Jackson, John Anderson, Patty Loveless, Kathy Mattea, Randy Travis, Dwight Yoakam, Clint Black, Ricky Skaggs, and the Judds. Beginning in 1989, a confluence of events brought an unprecedented commercial boom to country music. New marketing strategies were used to engage fans, powered by technology that more accurately tracked the popularity of country music, and boosted by a political and economic climate that focused attention on the genre. Garth Brooks ("Friends in Low Places") in particular attracted fans with his fusion of neotraditionalist country and stadium rock. Other artists such as Brooks and Dunn ("Boot Scootin' Boogie") also combined conventional country with slick, rock elements, while Lorrie Morgan, Mary Chapin Carpenter, and Kathy Mattea updated neotraditionalist styles. Fifth generation (1990s) Country music was aided by the U.S. Federal Communications Commission's (FCC) Docket 80–90, which led to a significant expansion of FM radio in the 1980s by adding numerous higher-fidelity FM signals to rural and suburban areas. At this point, country music was mainly heard on rural AM radio stations; the expansion of FM was particularly helpful to country music, which migrated to FM from the AM band as AM became overcome by talk radio (the country music stations that stayed on AM developed the classic country format for the AM audience). At the same time, beautiful music stations already in rural areas began abandoning the format (leading to its effective demise) to adopt country music as well. This wider availability of country music led to producers seeking to polish their product for a wider audience. In 1990, Billboard, which had published a country music chart since the 1940s, changed the methodology it used to compile the chart: singles sales were removed from the methodology, and only airplay on country radio determined a song's place on the chart. In the 1990s, country music became a worldwide phenomenon thanks to Garth Brooks, who enjoyed one of the most successful careers in popular music history, breaking records for both sales and concert attendance throughout the decade. The RIAA has certified his recordings at a combined (128× platinum), denoting roughly 113 million U.S. shipments. Other artists who experienced success during this time included Clint Black, John Michael Montgomery, Tracy Lawrence, Tim McGraw, Kenny Chesney, Travis Tritt, Alan Jackson and the newly formed duo of Brooks & Dunn; George Strait, whose career began in the 1980s, also continued to have widespread success in this decade and beyond. Toby Keith began his career as a more pop-oriented country singer in the 1990s, evolving into an outlaw persona in the early 2000s with Pull My Chain and its follow-up, Unleashed. Success of female artists Female artists such as Reba McEntire, Patty Loveless, Faith Hill, Martina McBride, Deana Carter, LeAnn Rimes, Mindy McCready, Pam Tillis, Lorrie Morgan, Shania Twain, and Mary Chapin Carpenter all released platinum-selling albums in the 1990s. 
The Dixie Chicks became one of the most popular country bands in the 1990s and early 2000s. Their 1998 debut album Wide Open Spaces was certified 12x platinum, while their 1999 album Fly went 10x platinum. After their third album, Home, was released in 2003, the band made political news in part because of lead singer Natalie Maines's comments disparaging then-President George W. Bush while the band was overseas (Maines stated that she and her bandmates were ashamed to be from the same state as Bush, who had just commenced the Iraq War a few days prior). The comments caused a rift between the band and the country music scene, and the band's fourth (and most recent) album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall among non-country audiences but largely ignored among country audiences. After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before reuniting in 2016 and releasing new material in 2020. Shania Twain became the best-selling female country artist of the decade. This was primarily due to the success of her breakthrough sophomore album, 1995's The Woman in Me, which was certified 12x platinum and sold over 20 million copies worldwide, and its follow-up, 1997's Come On Over, which was certified 20x platinum and sold over 40 million copies. Come On Over became a major worldwide phenomenon and was one of the world's best-selling albums of 1998, 1999 and 2000; it also went on to become the best-selling country album of all time. Twain enjoyed international success on a scale that had been seen by very few country artists, before or after her. Critics have noted that Twain enjoyed much of her success by breaking free of traditional country stereotypes and incorporating elements of rock and pop into her music. In 2002, she released her successful fourth studio album, Up!, which was certified 11x platinum and sold over 15 million copies worldwide. Twain has been nominated for eighteen Grammy Awards and has won five. She was the best-paid country music star in 2016 according to Forbes, with a net worth of $27.5 million. Twain has been credited with breaking international boundaries for country music, as well as inspiring many country artists to incorporate different genres into their music in order to attract a wider audience. She is also credited with changing the way many female country performers market themselves; unlike many before her, she used fashion and her sex appeal to shed the stereotypical "honky-tonk" image and distinguish herself from other female country artists of the time. Line dancing revival In the early to mid-1990s, country western music was influenced by the popularity of line dancing. This influence was so great that Chet Atkins was quoted as saying, "The music has gotten pretty bad, I think. It's all that damn line dancing." By the end of the decade, however, at least one line dance choreographer complained that good country line dance music was no longer being released. In contrast, artists such as Don Williams and George Jones, who had enjoyed more or less consistent chart success through the 1970s and 1980s, suddenly saw their fortunes fall rapidly around 1991 when the new chart rules took effect.
With the fusion genre of "country trap"—a fusion of country/western themes to a hip hop beat, but usually with fully sung lyrics—emerging in the late 2010s, line dancing country had a minor revival. Examples of the phenomenon include "Old Town Road" by Lil Nas X and "The Git Up" by Blanco Brown, both of which topped the Billboard country charts despite scant radio airplay. Alternative country Country influences combined with Punk rock and alternative rock to forge the "cowpunk" scene in Southern California during the 1980s, which included bands such as the Long Ryders, Lone Justice and the Beat Farmers, as well as the established punk group X, whose music had begun to include country and rockabilly influences. Simultaneously, a generation of diverse country artists outside of California emerged that rejected the perceived cultural and musical conservatism associated with Nashville's mainstream country musicians in favor of more countercultural outlaw country and the folk singer-songwriter traditions of artists such as Woody Guthrie, Gram Parsons and Bob Dylan. Artists from outside California who were associated with early alternative country included singer-songwriters such as Lucinda Williams, Lyle Lovett and Steve Earle, the Nashville country rock band Jason and the Scorchers, the Providence "cowboy pop" band Rubber Rodeo, and the British post-punk band the Mekons. Earle, in particular, was noted for his popularity with both country and college rock audiences: He promoted his 1986 debut album Guitar Town with a tour that saw him open for both country singer Dwight Yoakam and alternative rock band the Replacements. Yoakam also cultivated a fanbase spanning multiple genres through his stripped-down honky-tonk influenced sound, association with the cowpunk scene, and performances at Los Angeles punk rock clubs. These early styles had coalesced into a genre by the time the Illinois group Uncle Tupelo released their influential debut album No Depression in 1990. The album is widely credited as being the first "alternative country" album, and inspired the name of No Depression magazine, which exclusively covered the new genre. Following Uncle Tupelo's disbanding in 1994, its members formed two significant bands in genre: Wilco and Son Volt. Although Wilco's sound had moved away from country and towards indie rock by the time they released their critically acclaimed album Yankee Hotel Foxtrot in 2002, they have continued to be an influence on later alt-country artists. Other acts who became prominent in the alt-country genre during the 1990s and 2000s included the Bottle Rockets, the Handsome Family, Blue Mountain, Robbie Fulks, Blood Oranges, Bright Eyes, Drive-By Truckers, Old 97's, Old Crow Medicine Show, Nickel Creek, Neko Case, and Whiskeytown, whose lead singer Ryan Adams later had a successful solo-career. Alt-country, in various iterations overlapped with other genres, including Red Dirt country music (Cross Canadian Ragweed), jam bands (My Morning Jacket and the String Cheese Incident), and indie folk (the Avett Brothers). Despite the genre's growing popularity in the 1980s, '90s and 2000s, alternative country and neo-traditionalist artists saw minimal support from country radio in those decades, despite strong sales and critical acclaim for albums such as the soundtrack to the 2000 film O Brother, Where Art Thou?. 
In 1987, the Beat Farmers gained airplay on country music stations with their song "Make It Last", but the single was pulled from the format when station programmers decreed the band's music was too rock-oriented for their audience. However, some alt-country songs have been crossover hits to mainstream country radio in cover versions by established artists on the format; Lucinda Williams' "Passionate Kisses" was a hit for Mary Chapin Carpenter in 1993, Ryan Adams's "When the Stars Go Blue" was a hit for Tim McGraw in 2007, and Old Crow Medicine Show's "Wagon Wheel" was a hit for Darius Rucker in 2013. In the 2010s, the alt-country genre saw an increase in its critical and commercial popularity, owing to the success of artists such as the Civil Wars, Chris Stapleton, Sturgill Simpson, Jason Isbell, Lydia Loveless and Margo Price. In 2019, Kacey Musgraves – a country artist who had gained a following with indie rock fans and music critics despite minimal airplay on country radio – won the Grammy Award for Album of the Year for her album Golden Hour. Sixth generation (2000s–present) The sixth generation of country music continued to be influenced by other genres such as pop, rock, and R&B. Richard Marx crossed over with his Days in Avalon album, which features five country songs and several singers and musicians. Alison Krauss sang background vocals to Marx's single "Straight from My Heart." Also, Bon Jovi had a hit single, "Who Says You Can't Go Home", with Jennifer Nettles of Sugarland. Kid Rock's collaboration with Sheryl Crow, "Picture," was a major crossover hit in 2001 and began Kid Rock's transition from hard rock to a country-rock hybrid that would later produce another major crossover hit, 2008's "All Summer Long." (Crow, whose music had often incorporated country elements, would also officially cross over into country with her hit "Easy" from her debut country album Feels like Home). Darius Rucker, frontman for the 1990s pop-rock band Hootie & the Blowfish, began a country solo career in the late 2000s, one that to date has produced five albums and several hits on both the country charts and the Billboard Hot 100. Singer-songwriter Unknown Hinson became famous for his appearance in the Charlotte television show Wild, Wild, South, after which Hinson started his own band and toured in southern states. Other rock stars who featured a country song on their albums were Don Henley (who released Cass County in 2015, an album which featured collaborations with numerous country artists) and Poison. The back half of the 2010-2020 decade saw an increasing number of mainstream country acts collaborate with pop and R&B acts; many of these songs achieved commercial success by appealing to fans across multiple genres; examples include collaborations between Kane Brown and Marshmello and Maren Morris and Zedd. There has also been interest from pop singers in country music, including Beyoncé, Lady Gaga, Alicia Keys, Gwen Stefani, Justin Timberlake, Justin Bieber and Pink. Supporting this movement is the new generation of contemporary pop-country, including Taylor Swift, Miranda Lambert, Carrie Underwood, Kacey Musgraves, Miley Cyrus, Billy Ray Cyrus, Sam Hunt, Chris Young, who introduced new themes in their works, touching on fundamental rights, feminism, and controversies about racism and religion of the older generations. 
Popular culture In 2005, country singer Carrie Underwood rose to fame as the winner of the fourth season of American Idol and has since become one of the most prominent recording artists in the genre, with worldwide sales of more than 65 million records and seven Grammy Awards. With her first single, "Inside Your Heaven", Underwood became the only solo country artist to have a number 1 hit on the Billboard Hot 100 chart in the 2000–2009 decade and also broke Billboard chart history as the first country music artist ever to debut at No. 1 on the Hot 100. Underwood's debut album, Some Hearts, became the best-selling solo female debut album in country music history, the fastest-selling debut country album in the history of the SoundScan era and the best-selling country album of the last 10 years, being ranked by Billboard as the number 1 Country Album of the 2000–2009 decade. She has also become the female country artist with the most number one hits on the Billboard Hot Country Songs chart in the Nielsen SoundScan era (1991–present), having 14 No. 1s and breaking her own Guinness Book record of ten. In 2007, Underwood won the Grammy Award for Best New Artist, becoming only the second Country artist in history (and the first in a decade) to win it. She also made history by becoming the seventh woman to win Entertainer of the Year at the Academy of Country Music Awards, and the first woman in history to win the award twice, as well as twice consecutively. Time has listed Underwood as one of the 100 most influential people in the world. In 2016, Underwood topped the Country Airplay chart for the 15th time, becoming the female artist with most number ones on that chart. Carrie Underwood was only one of several country stars produced by a television series in the 2000s. In addition to Underwood, American Idol launched the careers of Kellie Pickler, Josh Gracin, Bucky Covington, Kristy Lee Cook, Danny Gokey, Lauren Alaina and Scotty McCreery (as well as that of occasional country singer Kelly Clarkson) in the decade, and would continue to launch country careers in the 2010s. The series Nashville Star, while not nearly as successful as Idol, did manage to bring Miranda Lambert, Kacey Musgraves and Chris Young to mainstream success, also launching the careers of lower-profile musicians such as Buddy Jewell, Sean Patrick McGraw, and Canadian musician George Canyon. Can You Duet? produced the duos Steel Magnolia and Joey + Rory. Teen sitcoms also have influenced modern country music; in 2008, actress Jennette McCurdy (best known as the sidekick Sam on the teen sitcom iCarly) released her first single, "So Close", following that with the single "Generation Love" in 2011. Another teen sitcom star, Miley Cyrus (of Hannah Montana), also had a crossover hit in the late 2000s with "The Climb" and another with a duet with her father, Billy Ray Cyrus, with "Ready, Set, Don't Go." Jana Kramer, an actress in the teen drama One Tree Hill, released a country album in 2012 that has produced two hit singles as of 2013. Actresses Hayden Panettiere and Connie Britton began recording country songs as part of their roles in the TV series Nashville and Pretty Little Liars star Lucy Hale released her debut album Road Between in 2014. In 2010, the group Lady Antebellum won five Grammys, including the coveted Song of the Year and Record of the Year for "Need You Now". A large number of duos and vocal groups emerged on the charts in the 2010s, many of which feature close harmony in the lead vocals. 
In addition to Lady A, groups such as Herrick, the Quebe Sisters Band, Little Big Town, the Band Perry, Gloriana, Thompson Square, Eli Young Band, Zac Brown Band and British duo the Shires have emerged to occupy a large share of mainstream success alongside solo singers such as Kacey Musgraves and Miranda Lambert. One of the most commercially successful country artists of the late 2000s and early 2010s has been singer-songwriter Taylor Swift. Swift first became widely known in 2006 when her debut single, "Tim McGraw," was released; she was only 16 at the time. That year, Swift released her self-titled debut studio album, which spent 275 weeks on the Billboard 200, one of the longest runs of any album on that chart. In 2008, Swift released her second studio album, Fearless, which had the second-most weeks at number one on the Billboard 200 and became the second best-selling album (just behind Adele's 21) of the five years that followed. At the 2010 Grammys, the 20-year-old Swift won Album of the Year for Fearless, making her the youngest artist at that time to win the award. Swift has received ten Grammy Awards to date. Buoyed by her teen idol status among girls and a change in the methodology of compiling the Billboard charts to favor pop-crossover songs, Swift's 2012 single "We Are Never Ever Getting Back Together" spent the most weeks at the top of Billboard's Hot 100 chart and Hot Country Songs chart of any song in nearly five decades. The song's long run at the top of the chart was somewhat controversial, as it is largely a pop song without much country influence and its success on the charts was driven by a change to the chart's criteria to include airplay on non-country radio stations, prompting disputes over what constitutes a country song; many of Swift's later releases, such as 1989 (2014), Reputation (2017), and Lover (2019), were released solely to pop audiences. Swift returned to country-leaning material on her folk-inspired releases Folklore (2020) and Evermore (2020), with songs like "Betty" and "No Body, No Crime". National patriotism The roots of patriotic, conservative country lie in Lee Greenwood's "God Bless the USA". The September 11 attacks of 2001 and the economic recession helped move country music back into the spotlight. Many country artists, such as Alan Jackson with his ballad about the attacks, "Where Were You (When the World Stopped Turning)", wrote songs that celebrated the military, highlighted the gospel, and emphasized home and family values over wealth. Alt-country singer Ryan Adams's song "New York, New York" pays tribute to New York City; its popular music video (which was shot four days before the attacks) shows Adams playing in front of the Manhattan skyline, along with several shots of the city. In contrast, more rock-oriented country singers took more direct aim at the attacks' perpetrators: Toby Keith's "Courtesy of the Red, White and Blue (The Angry American)" threatened to put "a boot in" the posterior of the enemy, while Charlie Daniels's "This Ain't No Rag, It's a Flag" promised to "hunt" the perpetrators "down like a mad dog hound." These songs gained such recognition that they put country music back into popular culture. Darryl Worley also recorded "Have You Forgotten" in the same vein.
Influence of rock music The influence of rock music in country has become more overt during the late 2000s and early 2010s as artists like Eric Church, Jason Aldean, and Brantley Gilbert have had success; Aaron Lewis, former frontman for the rock group Staind, had a moderately successful entry into country music in 2011 and 2012, as did Dallas Smith, former frontman of the band Default. Bro country In the early 2010s, "bro-country", a genre noted primarily for its themes of drinking and partying, girls, and pickup trucks, became particularly popular. Notable artists associated with this genre are Luke Bryan, Jason Aldean, Blake Shelton, Jake Owen and Florida Georgia Line, whose song "Cruise" became the best-selling country song of all time. Research in the mid-2010s suggested that about 45 percent of country's best-selling songs could be considered bro-country, with the top two artists being Luke Bryan and Florida Georgia Line. Albums by bro-country singers also sold very well—in 2013, Luke Bryan's Crash My Party was the third best-selling album overall in the US, with Florida Georgia Line's Here's to the Good Times at sixth and Blake Shelton's Based on a True Story at ninth. It is also thought that the popularity of bro-country helped country music to surpass classic rock as the most popular genre in America in 2012. The genre, however, is controversial: it has been criticized by other country musicians and commentators over its themes and its depiction of women, opening up a divide between the older generation of country singers and the younger bro-country singers that was described as a "civil war" by musicians, critics, and journalists. In 2014, Maddie & Tae's "Girl in a Country Song", addressing many of the controversial bro-country themes, peaked at number one on the Billboard Country Airplay chart. Texas Country Texas has produced some of the most talented musicians in country music, and these artists have built large Texas-based fan communities that regularly attend live shows throughout the state and tune in to their favorite songs on radio stations in Texas and beyond. Texas country music has developed a secondary music chart to the Nashville-based country music chart. The Texas Country Music Chart is composed of artists who were born in, reside in or have connections to Texas. Artists on this chart are major stars within Texas and within the reach of Texas country radio airplay. The work these artists have made is important not only for Texas music but for country music in general.
NSC 68 U.S. officials quickly moved to escalate and expand "containment." In a secret 1950 document, NSC 68, they proposed to strengthen their alliance systems, quadruple defense spending, and embark on an elaborate propaganda campaign to convince the U.S. public to fight this costly cold war. Truman ordered the development of a hydrogen bomb. In early 1950, the U.S. took its first efforts to oppose communist forces in Vietnam; planned to form a West German army, and prepared proposals for a peace treaty with Japan that would guarantee long-term U.S. military bases there. Outside Europe The Cold War took place worldwide, but it had a partially different timing and trajectory outside Europe. In Africa, decolonization took place first; it was largely accomplished in the 1950s. The main rivals then sought bases of support in the new national political alignments. In Latin America, the first major confrontation took place in Guatemala in 1954.
When the new Castro government of Cuba turned to Soviet support in 1960, Cuba became the center of the anti-American Cold War forces, supported by the Soviet Union. Chinese Civil War As Japan collapsed in 1945, the civil war resumed in China between the Kuomintang (KMT), led by Generalissimo Chiang Kai-shek, and the Chinese Communist Party, led by Mao Zedong. The USSR had signed a Treaty of Friendship with the Kuomintang in 1945 and disavowed support for the Chinese Communists. The outcome was closely fought, with the Communists finally prevailing through superior military tactics. Although the Nationalists had an advantage in numbers of men and weapons, initially controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In addition, the Chinese Communists were able to fill the political vacuum left in Manchuria after Soviet forces withdrew from the area and thus gained China's prime industrial base. The Chinese Communists were able to fight their way from the north and northeast, and virtually all of mainland China was taken by the end of 1949. On October 1, 1949, Mao Zedong proclaimed the People's Republic of China (PRC). Chiang Kai-shek, 600,000 Nationalist troops and 2 million refugees, predominantly from the government and business community, fled from the mainland to the island of Taiwan. In December 1949, Chiang proclaimed Taipei the temporary capital of the Republic of China (ROC) and continued to assert his government as the sole legitimate authority in China. Hostility between the Communists on the mainland and the Nationalists on Taiwan continued throughout the Cold War. Though the United States refused to aid Chiang Kai-shek in his hope to "recover the mainland," it continued supporting the Republic of China with military supplies and expertise to prevent Taiwan from falling into PRC hands. Through the support of the Western bloc (most Western countries continued to recognize the ROC as the sole legitimate government of China), the Republic of China on Taiwan retained China's seat in the United Nations until 1971. Madiun Affair The Madiun Affair took place on September 18, 1948, in the city of Madiun, East Java. The rebellion was carried out by the Front Demokrasi Rakyat (FDR, People's Democratic Front), which united all socialist and communist groups in Indonesia, and it ended three months later after its leaders were arrested and executed by the TNI. The revolt had its origins in the fall of the Amir Syarifuddin Cabinet, brought down by its signing of the Renville Agreement, which benefited the Dutch; it was eventually replaced by the Hatta Cabinet, which did not belong to the left wing. This led Amir Syarifuddin to declare opposition to the Hatta Cabinet government and to announce the formation of the People's Democratic Front. Earlier, in the PKI Politburo session of August 13–14, 1948, Musso, an Indonesian communist figure, had introduced a political concept called "Jalan Baru" (New Road). He also wanted a single Marxist party, the PKI (Communist Party of Indonesia), consisting of the illegal communists, the Labour Party of Indonesia, and the Partai Sosialis (Socialist Party). On September 18, 1948, the FDR declared the formation of the Republic of Soviet-Indonesia. In addition, the communists also carried out a rebellion in the Pati Residency and the kidnapping of groups who were considered to be anti-communist.
The rebellion also resulted in the murder of the Governor of East Java at the time, Raden Mas Tumenggung Ario Soerjo. A crackdown operation against the movement began, led by A.H. Nasution. The Indonesian government also appointed Commander General Sudirman to lead Military Operations Movement I, in which Sudirman ordered Colonel Gatot Soebroto and Colonel Sungkono to mobilize the TNI and police to crush the rebellion. On September 30, 1948, Madiun was recaptured by the Republic of Indonesia. Musso was shot dead while fleeing in Sumoroto, and Amir Syarifuddin was executed after being captured in Central Java. In early December 1948, the Madiun Affair crackdown was declared complete. Korean War In early 1950, the United States made its first commitment to form a peace treaty with Japan that would guarantee long-term U.S. military bases. Some observers (including George Kennan) believed that the Japanese treaty led Stalin to approve a plan to invade U.S.-supported South Korea on June 25, 1950. Korea had been divided at the end of World War II along the 38th parallel into Soviet and U.S. occupation zones, in which a communist government was installed in the North by the Soviets, and an elected government in the South came to power after UN-supervised elections in 1948. In June 1950, Kim Il-sung's North Korean People's Army invaded South Korea. Fearing that communist Korea under a Kim Il-sung dictatorship could threaten Japan and foster other communist movements in Asia, Truman committed U.S. forces and obtained help from the United Nations to counter the North Korean invasion. The Soviets boycotted UN Security Council meetings while protesting the Council's failure to seat the People's Republic of China and, thus, did not veto the Council's approval of UN action to oppose the North Korean invasion. A joint UN force of personnel from South Korea, the United States, Britain, Turkey, Canada, Australia, France, the Philippines, the Netherlands, Belgium, New Zealand and other countries joined to stop the invasion. After a Chinese invasion to assist the North Koreans, fighting stabilized along the 38th parallel, which had separated the Koreas. Truman faced a hostile China, a Sino-Soviet partnership, and a defense budget that had quadrupled in eighteen months. The Korean Armistice Agreement was signed in July 1953 after the death of Stalin, who had been insisting that the North Koreans continue fighting. In North Korea, Kim Il-sung created a highly centralized and brutal dictatorship, according himself unlimited power and generating a formidable cult of personality. Hydrogen bomb A hydrogen bomb—which derives its power from nuclear fusion rather than nuclear fission alone—was first tested by the United States in November 1952 and by the Soviet Union in August 1953. Such bombs were first deployed in the 1960s. Culture and media Fear of a nuclear war spurred the production of public safety films by the United States federal government's Civil Defense branch that demonstrated ways of protecting oneself from a Soviet nuclear attack. The 1951 children's film Duck and Cover is a prime example. George Orwell's classic dystopia Nineteen Eighty-Four was published in 1949. The novel explores life in an imagined future world where a totalitarian government has achieved terrifying levels of power and control. With Nineteen Eighty-Four, Orwell taps into the anti-communist fears that would continue to haunt so many in the West for decades to come.
In a Cold War setting his descriptions could hardly fail to evoke comparison to Soviet communism and the seeming willingness of Stalin and his successors to control those within the Soviet bloc by whatever means necessary. Orwell's famous allegory of totalitarian rule, Animal Farm, published in 1945, provoked similar anti-communist sentiments. Significant documents The Cold War generated innumerable documents. The texts of 171 documents appear in The Encyclopedia of the Cold War (2008). Baruch Plan: 1946. A proposal by the U.S. to the United Nations Atomic Energy Commission (UNAEC) to a) extend between all nations the exchange of basic scientific information for peaceful ends; b) implement control of atomic energy to the extent necessary to ensure its use only for peaceful purposes; c) eliminate from national armaments atomic weapons and all other major weapons adaptable to mass destruction; and d) establish effective safeguards by way of inspection and other means to protect complying States against the hazards of violations and evasions. When the Soviet Union was the only member state which refused to sign, the U.S. embarked on a massive nuclear weapons testing, development, and deployment program. The Long Telegram and The "X Article", 1946–1947. Formally titled "The Sources of Soviet Conduct". The article describes the concepts that became the foundation of United States Cold War policy and was published in Foreign Affairs in 1947. The article was an expansion of a well-circulated top secret State Department cable called the X Article and became famous for setting forth the doctrine of containment. Though the article was signed pseudonymously by "X," it was well known at the time that the true author was George F. Kennan, the deputy chief of mission of the United States to the Soviet Union from 1944 to 1946, under ambassador W. Averell Harriman. NSC 68: April 14, 1950. A classified report written and issued by the United States National Security Council. The report outlined the National Security Strategy of the United States for that time and provided a comprehensive analysis of the capabilities of the Soviet Union and of the United States from military, economic, political, and psychological standpoints. NSC68's principal thesis was that the Soviet Union intended to become the single dominant world power. The report argued that the Soviet Union had a systematic strategy aimed at the spread of communism across the entire world, and it recommended that the United States government adopt a policy of containment to stop the further spread of Soviet power. NSC68 outlined a drastic foreign policy shift from defensive to active containment and advocated aggressive military preparedness. NSC68 shaped government actions in the Cold War for the next 20 years and has subsequently been labeled the "blueprint" for the Cold War. Speech by James F. Byrnes, United States Secretary of State "Restatement of Policy on Germany" Stuttgart September 6, 1946. Also known as the "Speech of hope," it set the tone of future U.S. policy as it repudiated the Morgenthau Plan economic policies and gave the Germans hope for the future. The Western powers worst fear was that the poverty and hunger would drive the Germans to communism. General Lucius Clay stated "There is no choice between being a communist on 1,500 calories a day and a believer in democracy on a thousand". 
The speech was also seen as a stand against the Soviet Union because it stated the firm intention of the United States to maintain a military presence in Europe indefinitely. But the heart of the message was, as Byrnes stated a month later, "The nub of our program was to win the German people ... it was a battle between us and Russia over minds". See also Western Union History of the Soviet Union (1927–1953) History of the United States (1945–1964) Timeline of events in the Cold War Animal Farm Truman Doctrine In the Greek Civil War, the Soviet Union gave no aid to the army of the Communist Party of Greece, the DSE (Democratic Army of Greece). The UK had given aid to the royalist Greek forces, leaving the Communists (without Soviet aid and having boycotted the elections) at a disadvantage. However, by 1947, the near-bankrupt British government could no longer maintain its massive overseas commitments. In addition to granting independence to India and handing back the Palestinian Mandate to the United Nations, the British government decided to withdraw from both Greece and nearby Turkey. This would have left the two nations, in particular Greece, on the brink of a communist-led revolution. Notified that British aid to Greece and Turkey would end in less than six weeks, and already hostile towards and suspicious of Soviet intentions because of the Soviets' reluctance to withdraw from Iran, the Truman administration decided that additional action was necessary. With Congress solidly in Republican hands, and with isolationist sentiment strong among the U.S. public, Truman adopted an ideological approach. In a meeting with congressional leaders, the argument of "apples in a barrel infected by one rotten one" was used to convince them of the significance of supporting Greece and Turkey. It was to become the "domino theory". On the morning of March 12, 1947, President Harry S. Truman appeared before Congress to ask for $400 million in aid to Greece and Turkey.
Calling for congressional approval for the United States to "support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures," or, in short, a policy of "containment", Truman articulated a presentation of the ideological struggle that became known as the "Truman Doctrine." Although based on a simplistic analysis of internal strife in Greece and Turkey, it became the single dominating influence over U.S. policy until at least the Vietnam War. Truman's speech had a tremendous effect. The anti-communist feelings that had just begun to hatch in the U.S. were given a great boost, and a silenced Congress voted overwhelmingly in approval of aid. The United States would not withdraw back to the Western Hemisphere as it had after World War I. From then on, the U.S. actively fought communist advances anywhere in the world under the ostensible causes of "freedom", "democracy" and "human rights." The U.S. brandished its role as the leader of the "free world." Meanwhile, the Soviet Union brandished its position as the leader of the "progressive" and "anti-imperialist" camp. Nazi–Soviet relations and Falsifiers of History Relations further deteriorated when, in January 1948, the U.S. State Department published a collection of documents titled Nazi–Soviet Relations, 1939–1941: Documents from the Archives of The German Foreign Office, which contained documents recovered from the Foreign Office of Nazi Germany revealing Soviet conversations with Germany regarding the Molotov–Ribbentrop Pact, including its secret protocol dividing eastern Europe, the 1939 German–Soviet Commercial Agreement, and discussions of the Soviet Union potentially becoming the fourth Axis Power. In response, one month later, the Soviet Union published Falsifiers of History, a Stalin-edited and partially rewritten book attacking the West. The book did not attempt to directly counter or deal with the documents published in Nazi–Soviet Relations, focusing instead upon Western culpability for the outbreak of war in 1939. It argued that "Western powers" had aided Nazi rearmament and aggression, including that American bankers and industrialists had provided capital for the growth of German war industries while deliberately encouraging Hitler to expand eastward. The book also included the claim that, during the Pact's operation, Stalin rejected Hitler's offer to share in a division of the world, without mentioning the Soviet offers to join the Axis. Historical studies, official accounts, memoirs and textbooks published in the Soviet Union used that depiction of events until the Soviet Union's dissolution. Berlin Blockade Following the Marshall Plan, the introduction of a new currency to Western Germany to replace the debased Reichsmark, and massive electoral losses for communist parties in 1946, the Soviet Union cut off surface road access to Berlin in June 1948. On the day of the Berlin Blockade, a Soviet representative told the other occupying powers: "We are warning both you and the population of Berlin that we shall apply economic and administrative sanctions that will lead to circulation in Berlin exclusively of the currency of the Soviet occupation zone." Thereafter, street and water communications were severed, rail and barge traffic was stopped, and the Soviets initially stopped supplying food to the civilian population in the non-Soviet sectors of Berlin.
Because Berlin was located within the Soviet-occupied zone of Germany and the other occupying powers had previously relied on Soviet good will for access to Berlin, the only available methods of supplying the city were three limited air corridors. By February 1948, because of massive post-war military cuts, the entire United States army had been reduced to 552,000 men. Military forces in non-Soviet Berlin sectors totaled only 8,973 Americans, 7,606 British and 6,100 French. Soviet military forces in the Soviet sector that surrounded Berlin totaled one and a half million men. The two United States regiments in Berlin would have provided little resistance against a Soviet attack. Believing that Britain, France and the United States had little option other than to acquiesce, the Soviet Military Administration in Germany celebrated the beginning of the blockade. Thereafter, a massive aerial supply campaign of food, water and other goods was initiated by the United States, Britain, France and other countries. The Soviets derided "the futile attempts of the Americans to save face and to maintain their untenable position in Berlin." The success of the airlift eventually caused the Soviets to lift their blockade in May 1949. However, the Soviet Army was still capable of conquering Western Europe without much difficulty. In September 1948, US military intelligence experts estimated that the Soviets had about 485,000 troops in their German occupation zone and in Poland, and some 1.785 million troops in Europe in total. At the same time, the number of US troops in 1948 was about 140,000. Tito–Stalin Split After disagreements between Yugoslavian leader Josip Broz Tito and the Soviet Union regarding Greece and the People's Republic of Albania, a Tito–Stalin Split occurred, followed by Yugoslavia being expelled from the Cominform in June 1948 and a brief failed Soviet putsch in Belgrade. The split created two separate communist forces in Europe. A vehement campaign against "Titoism" was immediately started in the Eastern Bloc, describing agents of both the West and Tito in all places engaging in subversive activity. This resulted in the persecution of many major party cadres, including those in East Germany. Besides Berlin, the port city of Trieste was a particular focus after the Second World War. Until the break between Tito and Stalin, the Western powers and the Eastern bloc faced each other uncompromisingly. The neutral buffer state Free Territory of Trieste, founded in 1947 with the United Nations, was split up and dissolved in 1954 and 1975, also because of the détente between the West and Tito. NATO The United States joined Britain, France, Canada, Denmark, Portugal, Norway, Belgium, Iceland, Luxembourg, Italy, and the Netherlands in 1949 to form the North Atlantic Treaty Organization (NATO), the United States' first "entangling" European alliance in 170 years. West Germany, Spain, Greece, and Turkey would later join this alliance. The Eastern leaders retaliated against these steps by integrating the economies of their nations in Comecon, their version of the Marshall Plan; exploding the first Soviet atomic device in 1949; signing an alliance with People's Republic of China in February 1950; and forming the Warsaw Pact, Eastern Europe's counterpart to NATO, in 1955. The Soviet Union, Albania, Czechoslovakia, Hungary, East Germany, Bulgaria, Romania, and Poland founded this military alliance. NSC 68 U.S. officials quickly moved to escalate and expand "containment." 
In a secret 1950 document, NSC 68, they proposed to strengthen their alliance systems, quadruple defense spending, and embark on an elaborate propaganda campaign to convince the U.S. public to fight this costly cold war. Truman ordered the development of a hydrogen bomb. In early 1950, the U.S. made its first efforts to oppose communist forces in Vietnam, planned to form a West German army, and prepared proposals for a peace treaty with Japan that would guarantee long-term U.S. military bases there. Outside Europe The Cold War took place worldwide, but it had a partially different timing and trajectory outside Europe. In Africa, decolonization took place first; it was largely accomplished in the 1950s. The main rivals then sought bases of support in the new national political alignments. In Latin America, the first major confrontation took place in Guatemala in 1954. When the new Castro government of Cuba turned to Soviet support in 1960, Cuba became the center of the anti-American Cold War forces, supported by the Soviet Union. Chinese Civil War As Japan collapsed in 1945, the civil war resumed in China between the Kuomintang (KMT) led by Generalissimo Chiang Kai-shek and the Chinese Communist Party led by Mao Zedong. The USSR had signed a Treaty of Friendship with the Kuomintang in 1945 and disavowed support for the Chinese Communists. The outcome was closely fought, with the Communists finally prevailing through superior military tactics. Although the Nationalists had an advantage in numbers of men and weapons, initially controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In addition, the Chinese Communists were able to fill the political vacuum left in Manchuria after Soviet forces withdrew from the area and thus gained China's prime industrial base. The Chinese Communists were able to fight their way from the north and northeast, and virtually all of mainland China was taken by the end of 1949. On October 1, 1949, Mao Zedong proclaimed the People's Republic of China (PRC). Chiang Kai-shek, 600,000 Nationalist troops and 2 million refugees, predominantly from the government and business community, fled from the mainland to the island of Taiwan. In December 1949, Chiang proclaimed Taipei the temporary capital of the Republic of China (ROC) and continued to assert his government as the sole legitimate authority in China. Hostility between the Communists on the mainland and the Nationalists on Taiwan continued throughout the Cold War. Though the United States refused to aid Chiang Kai-shek in his hope to "recover the mainland," it continued supporting the Republic of China with military supplies and expertise to prevent Taiwan from falling into PRC hands. Through the support of the Western bloc (most Western countries continued to recognize the ROC as the sole legitimate government of China), the Republic of China on Taiwan retained China's seat in the United Nations until 1971. Madiun Affair The Madiun Affair took place on September 18, 1948 in the city of Madiun, East Java. The rebellion was carried out by the Front Demokrasi Rakyat (FDR, People's Democratic Front), which united all socialist and communist groups in Indonesia. It ended three months later after its leaders were arrested and executed by the TNI.
The revolt began with the fall of the Amir Syarifuddin Cabinet, brought down by its signing of the Renville Agreement, which benefited the Dutch; it was eventually replaced by the Hatta Cabinet, which did not belong to the left wing. This led Amir Syarifuddin to declare opposition to the Hatta Cabinet and to announce the formation of the People's Democratic Front. Earlier, in the PKI Politburo session of August 13–14, 1948, Musso, an Indonesian communist figure, had introduced a political concept called "Jalan Baru" ("New Road"). He also wanted a single Marxist party called the PKI (Communist Party of Indonesia), consisting of the illegal communists, the Labour Party of Indonesia, and the Partai Sosialis (Socialist Party). On September 18, 1948, the FDR declared the formation of the Republic of Soviet-Indonesia. The communists also carried out a rebellion in the Pati Residency and kidnapped groups considered to be opposed to the communists. The rebellion also resulted in the murder of the Governor of East Java at the time, Raden Mas Tumenggung Ario Soerjo. A crackdown operation against the movement began, led by A.H. Nasution. The Indonesian government also assigned Commander General Sudirman to Military Operations Movement I, in which Sudirman ordered Colonel Gatot Soebroto and Colonel Sungkono to mobilize the TNI and police to crush the rebellion. On September 30, 1948, Madiun was retaken by the Republic of Indonesia. Musso was shot dead while fleeing in Sumoroto, and Amir Syarifuddin was
is often achieved by the manipulation of relationships with state power by business interests rather than unfettered competition in obtaining permits, government grants, tax breaks, or other forms of state intervention over resources where business interests exercise undue influence over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices which seek to reward risk are stifled since the value-added is little by crony businesses, as hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into the government, the politics, and the media, when this nexus distorts the economy and affects society to an extent it corrupts public-serving economic, political, and social ideals. Historical usage The first extensive use of the term "crony capitalism" came about in the 1980s, to characterize the Philippine economy under the dictatorship of Ferdinand Marcos. Early uses of this term to describe the economic practices of the Marcos regime included that of Ricardo Manapat, who introduced it in his 1979 pamphlet "Some are Smarter than Others", which was later published in 1991; former Time magazine business editor George M. Taber, who used the term in a Time magazine article in 1980, and activist (and later Finance Minister) Jaime Ongpin, who used the term extensively in his writing and is sometimes credited for having coined it. The term crony capitalism made a significant impact in the public as an explanation of the Asian financial crisis. It is also used to describe governmental decisions favoring cronies of governmental officials. In this context, the term is often used comparatively with corporate welfare, a technical term often used to assess government bailouts and favoritistic monetary policy as opposed to the economic theory described by crony capitalism. The extent of difference between these terms is whether a government action can be said to benefit the individual rather than the industry. In practice Crony capitalism exists along a continuum. In its lightest form, crony capitalism consists of collusion among market players which is officially tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. For instance, newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. 
The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption. The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Thailand and Indonesia. In these cases, the term was used to point out how family members of the ruling leaders become extremely wealthy with no non-political justification. Southeast Asian nations, such as Hong Kong and Malaysia, still score very poorly in rankings measuring this. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public private partnerships, the more recent coal-gate scam and cheap allocation of land and resources to Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made to other countries such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that it faces two starkly contrasting futures, namely a market economy under the rule of law or crony capitalism. A dozen years later, prominent political scientist Pei Minxin had concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed. Many prosperous nations have also | tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. For instance, newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. 
Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption. The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Thailand and Indonesia. In these cases, the term was used to point out how family members of the ruling leaders became extremely wealthy with no non-political justification. Asian economies such as Hong Kong and Malaysia still score very poorly in rankings measuring this. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public–private partnerships, the more recent coal-gate scam and cheap allocation of land and resources to Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made about other countries such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that it faces two starkly contrasting futures, namely a market economy under the rule of law or crony capitalism. A dozen years later, prominent political scientist Pei Minxin concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed. Many prosperous nations have also had varying amounts of cronyism throughout their history, including the United Kingdom, especially in the 1600s and 1700s, as well as the United States and Japan. Crony capitalism index The Economist benchmarks countries based on a crony-capitalism index calculated from how much economic activity occurs in industries prone to cronyism. Its 2014 Crony Capitalism Index ranking listed Hong Kong, Russia and Malaysia in the top three spots. In finance Crony capitalism in finance was found in the Second Bank of the United States. It was a private company, but its largest stockholder was the federal government, which owned 20%. It was an early bank regulator and grew to be one of the most powerful organizations in the country, due largely to being the depository of the government's revenue. The Gramm–Leach–Bliley Act in 1999 completely removed Glass–Steagall's separation between commercial banks and investment banks. After this repeal, commercial banks, investment banks and insurance companies combined their lobbying efforts. Critics claim this was instrumental in the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005. In sections of an economy More direct government involvement in a specific sector can also lead to specific areas of crony capitalism, even if the economy as a whole may be competitive.
This is most common in natural resource sectors through the granting of mining or drilling concessions, but it is also possible through a process known as regulatory capture, in which the government agencies in charge of regulating an industry come to be controlled by that industry. Governments will often establish government agencies in good faith to regulate an industry. However, the members of an industry have a very strong interest in the actions of that regulatory body, while the rest of the citizenry are only lightly affected. As a result, it is not uncommon for current industry players to gain control of the watchdog and to use it against competitors. This typically takes the form of making it very expensive for a new entrant to enter the market. An 1824 landmark United States Supreme Court ruling overturned a New York State-granted monopoly ("a veritable model of state munificence" facilitated by Robert R. Livingston, one of the Founding Fathers) for the then-revolutionary technology of steamboats. Leveraging the Supreme Court's establishment of Congressional supremacy over commerce, the Interstate Commerce Commission was established in 1887 with the intent of regulating railroad robber barons. President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman, and a permit system was used to deny access to new entrants and legalize price fixing. The defense industry in the United States is often described as an example of crony capitalism in an industry. Connections with the Pentagon and lobbyists in Washington are described by critics as more important than actual competition due to the political and secretive nature of defense contracts. In the Airbus-Boeing WTO dispute, Airbus (which receives outright subsidies from European governments) has stated Boeing receives similar subsidies which are hidden as inefficient defense contracts. Other American defense companies were put under scrutiny for no-bid Iraq War and Hurricane Katrina-related contracts, purportedly due to having cronies in the Bush administration. Gerald P. O'Driscoll, former vice president at the Federal Reserve Bank of Dallas, stated that Fannie Mae and Freddie Mac became examples of crony capitalism as government backing let Fannie and Freddie dominate mortgage underwriting, saying: "The politicians created the mortgage giants, which then returned some of the profits to the pols—sometimes directly, as campaign funds; sometimes as "contributions" to favored constituents". In developing economies In its worst form, crony capitalism can devolve into simple corruption where any pretense of a free market is dispensed with. Bribes to government officials are considered de rigueur and tax evasion is common. This is seen in many parts of Africa and is sometimes called plutocracy (rule by wealth) or kleptocracy (rule by theft). Kenyan economist David Ndii has repeatedly brought to light how this system has manifested over time, occasioned by the reign of Uhuru Kenyatta as president. Corrupt governments may favor one set of business owners who have close ties to the government over others. This may also be done through religious or ethnic favoritism. For instance, Alawites in Syria have a disproportionate share of power in the government and business there (President Assad himself is an Alawite). This can be explained by considering personal relationships as a social network.
As government and business leaders try to accomplish various things, they naturally turn to other powerful people for support in their endeavors. These people form hubs in the network. In a developing country, those hubs may be very few, thus concentrating economic and political power in a small interlocking group. Normally, such concentration would be untenable to maintain in business, as new entrants would affect the market. However,
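The "hubs in a social network" explanation above lends itself to a small illustration. The sketch below is not drawn from the article: the actors, ties, and degree threshold are hypothetical, and it simply shows how counting connections in such a network surfaces the few hub figures through whom economic and political power is concentrated.

```python
from collections import defaultdict

# Hypothetical ties between officials and business owners in a small economy.
# Each pair is an undirected "knows / exchanges favors with" relationship.
ties = [
    ("minister_a", "tycoon_1"), ("minister_a", "tycoon_2"),
    ("minister_a", "bank_ceo"), ("minister_a", "regulator_x"),
    ("regulator_x", "tycoon_1"), ("regulator_x", "bank_ceo"),
    ("tycoon_1", "tycoon_2"),
    ("small_firm_1", "tycoon_1"),   # peripheral players have few ties
    ("small_firm_2", "bank_ceo"),
]

# Build an adjacency map (the graph) from the edge list.
graph = defaultdict(set)
for a, b in ties:
    graph[a].add(b)
    graph[b].add(a)

# Degree = number of direct connections; hubs are the high-degree nodes.
degrees = {node: len(neighbors) for node, neighbors in graph.items()}
hubs = [n for n, d in sorted(degrees.items(), key=lambda kv: -kv[1]) if d >= 3]

# How much of the network runs through the hubs?
edges_touching_hubs = sum(1 for a, b in ties if a in hubs or b in hubs)

print("hubs:", hubs)
print(f"{edges_touching_hubs}/{len(ties)} ties involve a hub")
```

With real relationship data, an analyst would typically use richer centrality measures than raw degree counts, but the basic step of locating the small interlocking group the paragraph describes is the same.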
United States Catholic universities Ecclesiastical universities Benedictine colleges and universities Jesuit institutions Opus Dei universities Pontifical universities International Council of Universities of Saint Thomas Aquinas International Federation of Catholic Universities Churches of Christ Church of the Nazarene Christian churches and churches of Christ Islamic seminaries Lutheran colleges and universities Universities and colleges affiliated with the United Methodist Church Muslim educational institutions Extremities Endowment Largest universities by enrollment Oldest madrasahs in continuous operation | Catholic universities Ecclesiastical universities Benedictine colleges and universities Jesuit institutions Opus Dei universities Pontifical universities International Council of Universities of Saint Thomas Aquinas International Federation of Catholic Universities Churches of Christ Church of the Nazarene Christian churches and churches of Christ Islamic seminaries Lutheran colleges and universities Universities and colleges affiliated with the United Methodist Church |
Yaroslava, originally combined by Yaroslav the Wise the Grand Prince of Kyiv, was granted to Great Novgorod around 1017, and in 1054 was incorporated into the Ruska Pravda; it became the law for all of Kievan Rus. It survived only in later editions of the 15th century. In England, Henry I's proclamation of the Charter of Liberties in 1100 bound the king for the first time in his treatment of the clergy and the nobility. This idea was extended and refined by the English barony when they forced King John to sign Magna Carta in 1215. The most important single article of the Magna Carta, related to "habeas corpus", provided that the king was not permitted to imprison, outlaw, exile or kill anyone at a whim – there must be due process of law first. This article, Article 39, of the Magna Carta read: This provision became the cornerstone of English liberty after that point. The social contract in the original case was between the king and the nobility, but was gradually extended to all of the people. It led to the system of Constitutional Monarchy, with further reforms shifting the balance of power from the monarchy and nobility to the House of Commons. The Nomocanon of Saint Sava () was the first Serbian constitution from 1219. St. Sava's Nomocanon was the compilation of civil law, based on Roman Law, and canon law, based on Ecumenical Councils. Its basic purpose was to organize the functioning of the young Serbian kingdom and the Serbian church. Saint Sava began the work on the Serbian Nomocanon in 1208 while he was at Mount Athos, using The Nomocanon in Fourteen Titles, Synopsis of Stefan the Efesian, Nomocanon of John Scholasticus, and Ecumenical Council documents, which he modified with the canonical commentaries of Aristinos and Joannes Zonaras, local church meetings, rules of the Holy Fathers, the law of Moses, the translation of Prohiron, and the Byzantine emperors' Novellae (most were taken from Justinian's Novellae). The Nomocanon was a completely new compilation of civil and canonical regulations, taken from Byzantine sources but completed and reformed by St. Sava to function properly in Serbia. Besides decrees that organized the life of church, there are various norms regarding civil life; most of these were taken from Prohiron. Legal transplants of Roman-Byzantine law became the basis of the Serbian medieval law. The essence of Zakonopravilo was based on Corpus Iuris Civilis. Stefan Dušan, emperor of Serbs and Greeks, enacted Dušan's Code () in Serbia, in two state congresses: in 1349 in Skopje and in 1354 in Serres. It regulated all social spheres, so it was the second Serbian constitution, after St. Sava's Nomocanon (Zakonopravilo). The Code was based on Roman-Byzantine law. The legal transplanting within articles 171 and 172 of Dušan's Code, which regulated the juridical independence, is notable. They were taken from the Byzantine code Basilika (book VII, 1, 16–17). In 1222, Hungarian King Andrew II issued the Golden Bull of 1222. Between 1220 and 1230, a Saxon administrator, Eike von Repgow, composed the Sachsenspiegel, which became the supreme law used in parts of Germany as late as 1900. Around 1240, the Coptic Egyptian Christian writer, 'Abul Fada'il Ibn al-'Assal, wrote the Fetha Negest in Arabic. 'Ibn al-Assal took his laws partly from apostolic writings and Mosaic law and partly from the former Byzantine codes. There are a few historical records claiming that this law code was translated into Ge'ez and entered Ethiopia around 1450 in the reign of Zara Yaqob. 
Even so, its first recorded use in the function of a constitution (supreme law of the land) is with Sarsa Dengel beginning in 1563. The Fetha Negest remained the supreme law in Ethiopia until 1931, when a modern-style Constitution was first granted by Emperor Haile Selassie I. In the Principality of Catalonia, the Catalan constitutions were promulgated by the Court from 1283 (or even two centuries before, if Usatges of Barcelona is considered part of the compilation of Constitutions) until 1716, when Philip V of Spain gave the Nueva Planta decrees, finishing with the historical laws of Catalonia. These Constitutions were usually made formally as a royal initiative, but required for its approval or repeal the favorable vote of the Catalan Courts, the medieval antecedent of the modern Parliaments. These laws, like other modern constitutions, had preeminence over other laws, and they could not be contradicted by mere decrees or edicts of the king. The Kouroukan Founga was a 13th-century charter of the Mali Empire, reconstructed from oral tradition in 1988 by Siriman Kouyaté. The Golden Bull of 1356 was a decree issued by a Reichstag in Nuremberg headed by Emperor Charles IV that fixed, for a period of more than four hundred years, an important aspect of the constitutional structure of the Holy Roman Empire. In China, the Hongwu Emperor created and refined a document he called Ancestral Injunctions (first published in 1375, revised twice more before his death in 1398). These rules served as a constitution for the Ming Dynasty for the next 250 years. The oldest written document still governing a sovereign nation today is that of San Marino. The Leges Statutae Republicae Sancti Marini was written in Latin and consists of six books. The first book, with 62 articles, establishes councils, courts, various executive officers, and the powers assigned to them. The remaining books cover criminal and civil law and judicial procedures and remedies. Written in 1600, the document was based upon the Statuti Comunali (Town Statute) of 1300, itself influenced by the Codex Justinianus, and it remains in force today. In 1392 the Carta de Logu was legal code of the Giudicato of Arborea promulgated by the giudicessa Eleanor. It was in force in Sardinia until it was superseded by the code of Charles Felix in April 1827. The Carta was a work of great importance in Sardinian history. It was an organic, coherent, and systematic work of legislation encompassing the civil and penal law. The Gayanashagowa, the oral constitution of the Haudenosaunee nation also known as the Great Law of Peace, established a system of governance as far back as 1190 AD (though perhaps more recently at 1451) in which the Sachems, or tribal chiefs, of the Iroquois League's member nations made decisions on the basis of universal consensus of all chiefs following discussions that were initiated by a single nation. The position of Sachem descends through families and are allocated by the senior female clan heads, though, prior to the filling of the position, candidacy is ultimately democratically decided by the community itself. Modern constitutions In 1634 the Kingdom of Sweden adopted the 1634 Instrument of Government, drawn up under the Lord High Chancellor of Sweden Axel Oxenstierna after the death of king Gustavus Adolphus, it can be seen as the first written constitution adopted by a modern state. 
In 1639, the Colony of Connecticut adopted the Fundamental Orders, which was the first North American constitution, and is the basis for every new Connecticut constitution since, and is also the reason for Connecticut's nickname, "the Constitution State". The English Protectorate that was set up by Oliver Cromwell after the English Civil War promulgated the first detailed written constitution adopted by a modern state; it was called the Instrument of Government. This formed the basis of government for the short-lived republic from 1653 to 1657 by providing a legal rationale for the increasing power of Cromwell after Parliament consistently failed to govern effectively. Most of the concepts and ideas embedded into modern constitutional theory, especially bicameralism, separation of powers, the written constitution, and judicial review, can be traced back to the experiments of that period. Drafted by Major-General John Lambert in 1653, the Instrument of Government included elements incorporated from an earlier document "Heads of Proposals", which had been agreed to by the Army Council in 1647, as a set of propositions intended to be a basis for a constitutional settlement after King Charles I was defeated in the First English Civil War. Charles had rejected the propositions, but before the start of the Second Civil War, the Grandees of the New Model Army had presented the Heads of Proposals as their alternative to the more radical Agreement of the People presented by the Agitators and their civilian supporters at the Putney Debates. On January 4, 1649, the Rump Parliament declared "that the people are, under God, the original of all just power; that the Commons of England, being chosen by and representing the people, have the supreme power in this nation". The Instrument of Government was adopted by Parliament on December 15, 1653, and Oliver Cromwell was installed as Lord Protector on the following day. The constitution set up a state council consisting of 21 members while executive authority was vested in the office of "Lord Protector of the Commonwealth." This position was designated as a non-hereditary life appointment. The Instrument also required the calling of triennial Parliaments, with each sitting for at least five months. The Instrument of Government was replaced in May 1657 by England's second, and last, codified constitution, the Humble Petition and Advice, proposed by Sir Christopher Packe. The Petition offered hereditary monarchy to Oliver Cromwell, asserted Parliament's control over issuing new taxation, provided an independent council to advise the king and safeguarded "Triennial" meetings of Parliament. A modified version of the Humble Petition with the clause on kingship removed was ratified on 25 May. This finally met its demise in conjunction with the death of Cromwell and the Restoration of the monarchy. Other examples of European constitutions of this era were the Corsican Constitution of 1755 and the Swedish Constitution of 1772. All of the British colonies in North America that were to become the 13 original United States, adopted their own constitutions in 1776 and 1777, during the American Revolution (and before the later Articles of Confederation and United States Constitution), with the exceptions of Massachusetts, Connecticut and Rhode Island. The Commonwealth of Massachusetts adopted its Constitution in 1780, the oldest still-functioning constitution of any U.S. 
state; while Connecticut and Rhode Island officially continued to operate under their old colonial charters, until they adopted their first state constitutions in 1818 and 1843, respectively. Democratic constitutions What is sometimes called the "enlightened constitution" model was developed by philosophers of the Age of Enlightenment such as Thomas Hobbes, Jean-Jacques Rousseau, and John Locke. The model proposed that constitutional governments should be stable, adaptable, accountable, open and should represent the people (i.e., support democracy). Agreements and Constitutions of Laws and Freedoms of the Zaporizian Host was written in 1710 by Pylyp Orlyk, hetman of the Zaporozhian Host. It was written to establish a free Zaporozhian-Ukrainian Republic, with the support of Charles XII of Sweden. It is notable in that it established a democratic standard for the separation of powers in government between the legislative, executive, and judiciary branches, well before the publication of Montesquieu's Spirit of the Laws. This Constitution also limited the executive authority of the hetman, and established a democratically elected Cossack parliament called the General Council. However, Orlyk's project for an independent Ukrainian State never materialized, and his constitution, written in exile, never went into effect. Corsican Constitutions of 1755 and 1794 were inspired by Jean-Jacques Rousseau. The latter introduced universal suffrage for property owners. The Swedish constitution of 1772 was enacted under King Gustavus III and was inspired by the separation of powers by Montesquieu. The king also cherished other enlightenment ideas (as an enlighted despot) and repealed torture, liberated agricultural trade, diminished the use of the death penalty and instituted a form of religious freedom. The constitution was commended by Voltaire. The United States Constitution, ratified June 21, 1788, was influenced by the writings of Polybius, Locke, Montesquieu, and others. The document became a benchmark for republicanism and codified constitutions written thereafter. The Polish–Lithuanian Commonwealth Constitution was passed on May 3, 1791. Its draft was developed by the leading minds of the Enlightenment in Poland such as King Stanislaw August Poniatowski, Stanisław Staszic, Scipione Piattoli, Julian Ursyn Niemcewicz, Ignacy Potocki and Hugo Kołłątaj. It was adopted by the Great Sejm and is considered the first constitution of its kind in Europe and the world's second oldest one after the American Constitution. Another landmark document was the French Constitution of 1791. The 1811 Constitution of Venezuela was the first Constitution of Venezuela and Latin America, promulgated and drafted by Cristóbal Mendoza and Juan Germán Roscio and in Caracas. It established a federal government but was repealed one year later. On March 19, the Spanish Constitution of 1812 was ratified by a parliament gathered in Cadiz, the only Spanish continental city which was safe from French occupation. The Spanish Constitution served as a model for other liberal constitutions of several South European and Latin American nations, for example, the Portuguese Constitution of 1822, constitutions of various Italian states during Carbonari revolts (i.e., in the Kingdom of the Two Sicilies), the Norwegian constitution of 1814, or the Mexican Constitution of 1824. In Brazil, the Constitution of 1824 expressed the option for the monarchy as political system after Brazilian Independence. 
The leader of the national emancipation process was the Portuguese prince Pedro I, elder son of the king of Portugal. Pedro was crowned in 1822 as first emperor of Brazil. The country was ruled by Constitutional monarchy until 1889, when it adopted the Republican model. In Denmark, as a result of the Napoleonic Wars, the absolute monarchy lost its personal possession of Norway to Sweden. Sweden had already enacted its 1809 Instrument of Government, which saw the division of power between the Riksdag, the king and the judiciary. However the Norwegians managed to infuse a radically democratic and liberal constitution in 1814, adopting many facets from the American constitution and the revolutionary French ones, but maintaining a hereditary monarch limited by the constitution, like the Spanish one. The first Swiss Federal Constitution was put in force in September 1848 (with official revisions in 1878, 1891, 1949, 1971, 1982 and 1999). The Serbian revolution initially led to a proclamation of a proto-constitution in 1811; the full-fledged Constitution of Serbia followed few decades later, in 1835. The first Serbian constitution (Sretenjski ustav) was adopted at the national assembly in Kragujevac on February 15, 1835. The Constitution of Canada came into force on July 1, 1867, as the British North America Act, an act of the British Parliament. Over a century later, the BNA Act was patriated to the Canadian Parliament and augmented with the Canadian Charter of Rights and Freedoms. Apart from the Constitution Acts, 1867 to 1982, Canada's constitution also has unwritten elements based in common law and convention. Principles of constitutional design After tribal people first began to live in cities and establish nations, many of these functioned according to unwritten customs, while some developed autocratic, even tyrannical monarchs, who ruled by decree, or mere personal whim. Such rule led some thinkers to take the position that what mattered was not the design of governmental institutions and operations, as much as the character of the rulers. This view can be seen in Plato, who called for rule by "philosopher-kings." Later writers, such as Aristotle, Cicero and Plutarch, would examine designs for government from a legal and historical standpoint. The Renaissance brought a series of political philosophers who wrote implied criticisms of the practices of monarchs and sought to identify principles of constitutional design that would be likely to yield more effective and just governance from their viewpoints. This began with revival of the Roman law of nations concept and its application to the relations among nations, and they sought to establish customary "laws of war and peace" to ameliorate wars and make them less likely. This led to considerations of what authority monarchs or other officials have and don't have, from where that authority derives, and the remedies for the abuse of such authority. A seminal juncture in this line of discourse arose in England from the Civil War, the Cromwellian Protectorate, the writings of Thomas Hobbes, Samuel Rutherford, the Levellers, John Milton, and James Harrington, leading to the debate between Robert Filmer, arguing for the divine right of monarchs, on the one side, and on the other, Henry Neville, James Tyrrell, Algernon Sidney, and John Locke. 
What arose from the latter was a concept of government being erected on the foundations of first, a state of nature governed by natural laws, then a state of society, established by a social contract or compact, which bring underlying natural or social laws, before governments are formally established on them as foundations. Along the way several writers examined how the design of government was important, even if the government were headed by a monarch. They also classified various historical examples of governmental designs, typically into democracies, aristocracies, or monarchies, and considered how just and effective each tended to be and why, and how the advantages of each might be obtained by combining elements of each into a more complex design that balanced competing tendencies. Some, such as Montesquieu, also examined how the functions of government, such as legislative, executive, and judicial, might appropriately be separated into branches. The prevailing theme among these writers was that the design of constitutions is not completely arbitrary or a matter of taste. They generally held that there are underlying principles of design that constrain all constitutions | Maurya king's rule in India. For constitutional principles almost lost to antiquity, see the code of Manu. Early Middle Ages Many of the Germanic peoples that filled the power vacuum left by the Western Roman Empire in the Early Middle Ages codified their laws. One of the first of these Germanic law codes to be written was the Visigothic Code of Euric (471 AD). This was followed by the Lex Burgundionum, applying separate codes for Germans and for Romans; the Pactus Alamannorum; and the Salic Law of the Franks, all written soon after 500. In 506, the Breviarum or "Lex Romana" of Alaric II, king of the Visigoths, adopted and consolidated the Codex Theodosianus together with assorted earlier Roman laws. Systems that appeared somewhat later include the Edictum Rothari of the Lombards (643), the Lex Visigothorum (654), the Lex Alamannorum (730), and the Lex Frisionum (c. 785). These continental codes were all composed in Latin, while Anglo-Saxon was used for those of England, beginning with the Code of Æthelberht of Kent (602). Around 893, Alfred the Great combined this and two other earlier Saxon codes, with various Mosaic and Christian precepts, to produce the Doom book code of laws for England. Japan's Seventeen-article constitution written in 604, reportedly by Prince Shōtoku, is an early example of a constitution in Asian political history. Influenced by Buddhist teachings, the document focuses more on social morality than on institutions of government, and remains a notable early attempt at a government constitution. The Constitution of Medina (, Ṣaḥīfat al-Madīna), also known as the Charter of Medina, was drafted by the Islamic prophet Muhammad after his flight (hijra) to Yathrib where he became political leader. It constituted a formal agreement between Muhammad and all of the significant tribes and families of Yathrib (later known as Medina), including Muslims, Jews, and pagans. The document was drawn up with the explicit concern of bringing to an end the bitter intertribal fighting between the clans of the Aws (Aus) and Khazraj within Medina. To this effect it instituted a number of rights and responsibilities for the Muslim, Jewish, and pagan communities of Medina bringing them within the fold of one community – the Ummah. 
The precise dating of the Constitution of Medina remains debated, but generally scholars agree it was written shortly after the Hijra (622). In Wales, the Cyfraith Hywel (Law of Hywel) was codified by Hywel Dda c. 942–950. Middle Ages after 1000 The Pravda Yaroslava, originally combined by Yaroslav the Wise the Grand Prince of Kyiv, was granted to Great Novgorod around 1017, and in 1054 was incorporated into the Ruska Pravda; it became the law for all of Kievan Rus. It survived only in later editions of the 15th century. In England, Henry I's proclamation of the Charter of Liberties in 1100 bound the king for the first time in his treatment of the clergy and the nobility. This idea was extended and refined by the English barony when they forced King John to sign Magna Carta in 1215. The most important single article of the Magna Carta, related to "habeas corpus", provided that the king was not permitted to imprison, outlaw, exile or kill anyone at a whim – there must be due process of law first. This article, Article 39, of the Magna Carta read: This provision became the cornerstone of English liberty after that point. The social contract in the original case was between the king and the nobility, but was gradually extended to all of the people. It led to the system of Constitutional Monarchy, with further reforms shifting the balance of power from the monarchy and nobility to the House of Commons. The Nomocanon of Saint Sava () was the first Serbian constitution from 1219. St. Sava's Nomocanon was the compilation of civil law, based on Roman Law, and canon law, based on Ecumenical Councils. Its basic purpose was to organize the functioning of the young Serbian kingdom and the Serbian church. Saint Sava began the work on the Serbian Nomocanon in 1208 while he was at Mount Athos, using The Nomocanon in Fourteen Titles, Synopsis of Stefan the Efesian, Nomocanon of John Scholasticus, and Ecumenical Council documents, which he modified with the canonical commentaries of Aristinos and Joannes Zonaras, local church meetings, rules of the Holy Fathers, the law of Moses, the translation of Prohiron, and the Byzantine emperors' Novellae (most were taken from Justinian's Novellae). The Nomocanon was a completely new compilation of civil and canonical regulations, taken from Byzantine sources but completed and reformed by St. Sava to function properly in Serbia. Besides decrees that organized the life of church, there are various norms regarding civil life; most of these were taken from Prohiron. Legal transplants of Roman-Byzantine law became the basis of the Serbian medieval law. The essence of Zakonopravilo was based on Corpus Iuris Civilis. Stefan Dušan, emperor of Serbs and Greeks, enacted Dušan's Code () in Serbia, in two state congresses: in 1349 in Skopje and in 1354 in Serres. It regulated all social spheres, so it was the second Serbian constitution, after St. Sava's Nomocanon (Zakonopravilo). The Code was based on Roman-Byzantine law. The legal transplanting within articles 171 and 172 of Dušan's Code, which regulated the juridical independence, is notable. They were taken from the Byzantine code Basilika (book VII, 1, 16–17). In 1222, Hungarian King Andrew II issued the Golden Bull of 1222. Between 1220 and 1230, a Saxon administrator, Eike von Repgow, composed the Sachsenspiegel, which became the supreme law used in parts of Germany as late as 1900. Around 1240, the Coptic Egyptian Christian writer, 'Abul Fada'il Ibn al-'Assal, wrote the Fetha Negest in Arabic. 
'Ibn al-Assal took his laws partly from apostolic writings and Mosaic law and partly from the former Byzantine codes. There are a few historical records claiming that this law code was translated into Ge'ez and entered Ethiopia around 1450 in the reign of Zara Yaqob. Even so, its first recorded use in the function of a constitution (supreme law of the land) is with Sarsa Dengel beginning in 1563. The Fetha Negest remained the supreme law in Ethiopia until 1931, when a modern-style Constitution was first granted by Emperor Haile Selassie I. In the Principality of Catalonia, the Catalan constitutions were promulgated by the Court from 1283 (or even two centuries before, if Usatges of Barcelona is considered part of the compilation of Constitutions) until 1716, when Philip V of Spain gave the Nueva Planta decrees, finishing with the historical laws of Catalonia. These Constitutions were usually made formally as a royal initiative, but required for its approval or repeal the favorable vote of the Catalan Courts, the medieval antecedent of the modern Parliaments. These laws, like other modern constitutions, had preeminence over other laws, and they could not be contradicted by mere decrees or edicts of the king. The Kouroukan Founga was a 13th-century charter of the Mali Empire, reconstructed from oral tradition in 1988 by Siriman Kouyaté. The Golden Bull of 1356 was a decree issued by a Reichstag in Nuremberg headed by Emperor Charles IV that fixed, for a period of more than four hundred years, an important aspect of the constitutional structure of the Holy Roman Empire. In China, the Hongwu Emperor created and refined a document he called Ancestral Injunctions (first published in 1375, revised twice more before his death in 1398). These rules served as a constitution for the Ming Dynasty for the next 250 years. The oldest written document still governing a sovereign nation today is that of San Marino. The Leges Statutae Republicae Sancti Marini was written in Latin and consists of six books. The first book, with 62 articles, establishes councils, courts, various executive officers, and the powers assigned to them. The remaining books cover criminal and civil law and judicial procedures and remedies. Written in 1600, the document was based upon the Statuti Comunali (Town Statute) of 1300, itself influenced by the Codex Justinianus, and it remains in force today. In 1392 the Carta de Logu was legal code of the Giudicato of Arborea promulgated by the giudicessa Eleanor. It was in force in Sardinia until it was superseded by the code of Charles Felix in April 1827. The Carta was a work of great importance in Sardinian history. It was an organic, coherent, and systematic work of legislation encompassing the civil and penal law. The Gayanashagowa, the oral constitution of the Haudenosaunee nation also known as the Great Law of Peace, established a system of governance as far back as 1190 AD (though perhaps more recently at 1451) in which the Sachems, or tribal chiefs, of the Iroquois League's member nations made decisions on the basis of universal consensus of all chiefs following discussions that were initiated by a single nation. The position of Sachem descends through families and are allocated by the senior female clan heads, though, prior to the filling of the position, candidacy is ultimately democratically decided by the community itself. 
Modern constitutions In 1634 the Kingdom of Sweden adopted the 1634 Instrument of Government, drawn up under the Lord High Chancellor of Sweden Axel Oxenstierna after the death of king Gustavus Adolphus, it can be seen as the first written constitution adopted by a modern state. In 1639, the Colony of Connecticut adopted the Fundamental Orders, which was the first North American constitution, and is the basis for every new Connecticut constitution since, and is also the reason for Connecticut's nickname, "the Constitution State". The English Protectorate that was set up by Oliver Cromwell after the English Civil War promulgated the first detailed written constitution adopted by a modern state; it was called the Instrument of Government. This formed the basis of government for the short-lived republic from 1653 to 1657 by providing a legal rationale for the increasing power of Cromwell after Parliament consistently failed to govern effectively. Most of the concepts and ideas embedded into modern constitutional theory, especially bicameralism, separation of powers, the written constitution, and judicial review, can be traced back to the experiments of that period. Drafted by Major-General John Lambert in 1653, the Instrument of Government included elements incorporated from an earlier document "Heads of Proposals", which had been agreed to by the Army Council in 1647, as a set of propositions intended to be a basis for a constitutional settlement after King Charles I was defeated in the First English Civil War. Charles had rejected the propositions, but before the start of the Second Civil War, the Grandees of the New Model Army had presented the Heads of Proposals as their alternative to the more radical Agreement of the People presented by the Agitators and their civilian supporters at the Putney Debates. On January 4, 1649, the Rump Parliament declared "that the people are, under God, the original of all just power; that the Commons of England, being chosen by and representing the people, have the supreme power in this nation". The Instrument of Government was adopted by Parliament on December 15, 1653, and Oliver Cromwell was installed as Lord Protector on the following day. The constitution set up a state council consisting of 21 members while executive authority was vested in the office of "Lord Protector of the Commonwealth." This position was designated as a non-hereditary life appointment. The Instrument also required the calling of triennial Parliaments, with each sitting for at least five months. The Instrument of Government was replaced in May 1657 by England's second, and last, codified constitution, the Humble Petition and Advice, proposed by Sir Christopher Packe. The Petition offered hereditary monarchy to Oliver Cromwell, asserted Parliament's control over issuing new taxation, provided an independent council to advise the king and safeguarded "Triennial" meetings of Parliament. A modified version of the Humble Petition with the clause on kingship removed was ratified on 25 May. This finally met its demise in conjunction with the death of Cromwell and the Restoration of the monarchy. Other examples of European constitutions of this era were the Corsican Constitution of 1755 and the Swedish Constitution of 1772. 
All of the British colonies in North America that were to become the 13 original United States adopted their own constitutions in 1776 and 1777, during the American Revolution (and before the later Articles of Confederation and United States Constitution), with the exceptions of Massachusetts, Connecticut and Rhode Island. The Commonwealth of Massachusetts adopted its Constitution in 1780, the oldest still-functioning constitution of any U.S. state; while Connecticut and Rhode Island officially continued to operate under their old colonial charters until they adopted their first state constitutions in 1818 and 1843, respectively. Democratic constitutions What is sometimes called the "enlightened constitution" model was developed by philosophers of the Age of Enlightenment such as Thomas Hobbes, Jean-Jacques Rousseau, and John Locke. The model proposed that constitutional governments should be stable, adaptable, accountable, open and should represent the people (i.e., support democracy). Agreements and Constitutions of Laws and Freedoms of the Zaporizian Host was written in 1710 by Pylyp Orlyk, hetman of the Zaporozhian Host. It was written to establish a free Zaporozhian-Ukrainian Republic, with the support of Charles XII of Sweden. It is notable in that it established a democratic standard for the separation of powers in government between the legislative, executive, and judiciary branches, well before the publication of Montesquieu's Spirit of the Laws. This Constitution also limited the executive authority of the hetman, and established a democratically elected Cossack parliament called the General Council. However, Orlyk's project for an independent Ukrainian State never materialized, and his constitution, written in exile, never went into effect. Corsican Constitutions of 1755 and 1794 were inspired by Jean-Jacques Rousseau. The latter introduced universal suffrage for property owners. The Swedish constitution of 1772 was enacted under King Gustavus III and was inspired by Montesquieu's doctrine of the separation of powers. The king also cherished other Enlightenment ideas (as an enlightened despot) and abolished torture, liberalized agricultural trade, diminished the use of the death penalty and instituted a form of religious freedom. The constitution was commended by Voltaire. The United States Constitution, ratified June 21, 1788, was influenced by the writings of Polybius, Locke, Montesquieu, and others. The document became a benchmark for republicanism and codified constitutions written thereafter. The Polish–Lithuanian Commonwealth Constitution was passed on May 3, 1791. Its draft was developed by the leading minds of the Enlightenment in Poland such as King Stanislaw August Poniatowski, Stanisław Staszic, Scipione Piattoli, Julian Ursyn Niemcewicz, Ignacy Potocki and Hugo Kołłątaj. It was adopted by the Great Sejm and is considered the first constitution of its kind in Europe and the world's second oldest after the American Constitution. Another landmark document was the French Constitution of 1791. The 1811 Constitution of Venezuela was the first Constitution of Venezuela and of Latin America, drafted by Cristóbal Mendoza and Juan Germán Roscio and promulgated in Caracas. It established a federal government but was repealed one year later. On March 19, 1812, the Spanish Constitution was ratified by a parliament gathered in Cadiz, the only Spanish continental city that was safe from French occupation. 
The Spanish Constitution served as a model for other liberal constitutions of several South European and Latin American nations, for example, the Portuguese Constitution of 1822, constitutions of various Italian states during Carbonari revolts (e.g., in the Kingdom of the Two Sicilies), the Norwegian constitution of 1814, or the Mexican Constitution of 1824. In Brazil, the Constitution of 1824 expressed the option for the monarchy as the political system after Brazilian independence. The leader of the national emancipation process was the Portuguese prince Pedro I, elder son of the king of Portugal. Pedro was crowned in 1822 as first emperor of Brazil. The country was ruled as a constitutional monarchy until 1889, when it adopted the republican model. In Denmark, as a result of the Napoleonic Wars, the absolute monarchy lost its personal possession of Norway to Sweden. Sweden had already enacted its 1809 Instrument of Government, which saw the division of power between the Riksdag, the king and the judiciary. However, the Norwegians managed to enact a radically democratic and liberal constitution in 1814, adopting many facets from the American constitution and the revolutionary French ones, but maintaining a hereditary monarch limited by the constitution, like the Spanish one. The first Swiss Federal Constitution was put in force in September 1848 (with official revisions in 1878, 1891, 1949, 1971, 1982 and 1999). The Serbian revolution initially led to a proclamation of a proto-constitution in 1811; the full-fledged Constitution of Serbia followed a few decades later, in 1835. The first Serbian constitution (Sretenjski ustav) was adopted at the national assembly in Kragujevac on February 15, 1835. The Constitution of Canada came into force on July 1, 1867, as the British North America Act, an act of the British Parliament. Over a century later, the BNA Act was patriated to the Canadian Parliament and augmented with the Canadian Charter of Rights and Freedoms. Apart from the Constitution Acts, 1867 to 1982, Canada's constitution also has unwritten elements based in common law and convention. Principles of constitutional design After tribal people first began to live in cities and establish nations, many of these functioned according to unwritten customs, while some developed autocratic, even tyrannical monarchs, who ruled by decree, or mere personal whim. Such rule led some 
Propagation of the common law to the colonies and Commonwealth by reception statutes A reception statute is a statutory law adopted as a former British colony becomes independent, by which the new nation adopts (i.e. receives) pre-independence common law, to the extent not explicitly rejected by the legislative body or constitution of the new nation. Reception statutes generally consider the English common law dating prior to independence, and the precedent originating from it, as the default law, because of the importance of using an extensive and predictable body of law to govern the conduct of citizens and businesses in a new state. All U.S. states, with the partial exception of Louisiana, have either implemented reception statutes or adopted the common law by judicial opinion. Other examples of reception statutes in the United States, the states of the U.S., Canada and its provinces, and Hong Kong, are discussed in the reception statute article. Yet, adoption of the common law in the newly independent nation was not a foregone conclusion, and was controversial. Immediately after the American Revolution, there was widespread distrust and hostility to anything British, and the common law was no exception. Jeffersonians decried lawyers and their common law tradition as threats to the new republic. The Jeffersonians preferred a legislatively enacted civil law under the control of the political process, rather than the common law developed by judges that—by design—were insulated from the political process. The Federalists believed that the common law was the birthright of Independence: after all, the natural rights to "life, liberty, and the pursuit of happiness" were the rights protected by common law. Even advocates for the common law approach noted that it was not an ideal fit for the newly independent colonies: judges and lawyers alike were severely hindered by a lack of printed legal materials. Before Independence, the most comprehensive law libraries had been maintained by Tory lawyers, and those libraries vanished with the loyalist expatriation, and the ability to print books was limited. Lawyer (later President) John Adams complained that he "suffered very much for the want of books". To bootstrap this most basic need of a common law system—knowable, written law—in 1803, lawyers in Massachusetts donated their books to found a law library. A Jeffersonian newspaper criticized the library, as it would carry forward "all the old authorities practiced in England for centuries back ... whereby a new system of jurisprudence [will be founded] on the high monarchical system [to] become the Common Law of this Commonwealth... [The library] may hereafter have a very unsocial purpose." For several decades after independence, English law still exerted influence over American common law—for example, with Byrne v Boadle (1863), which first applied the res ipsa loquitur doctrine. Decline of Latin maxims and "blind imitation of the past", and adding flexibility to stare decisis Well into the 19th century, ancient maxims played a large role in common law adjudication. Many of these maxims had originated in Roman Law, migrated to England before the introduction of Christianity to the British Isles, and were typically stated in Latin even in English decisions. Many examples are familiar in everyday speech even today, "One cannot be a judge in one's own cause" (see Dr. Bonham's Case), rights are reciprocal to obligations, and the like. 
Judicial decisions and treatises of the 17th and 18th centuries, such as those of Lord Chief Justice Edward Coke, presented the common law as a collection of such maxims. Reliance on old maxims and rigid adherence to precedent, no matter how old or ill-considered, came under critical discussion in the late 19th century, starting in the United States. Oliver Wendell Holmes Jr. in his famous article, "The Path of the Law", commented, "It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past." Justice Holmes noted that study of maxims might be sufficient for "the man of the present", but "the man of the future is the man of statistics and the master of economics". In an 1880 lecture at Harvard, he wrote: The life of the law has not been logic; it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation's development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics. In the early 20th century, Louis Brandeis, later appointed to the United States Supreme Court, became noted for his use of policy-driving facts and economics in his briefs, and extensive appendices presenting facts that lead a judge to the advocate's conclusion. By this time, briefs relied more on facts than on Latin maxims. Reliance on old maxims is now deprecated. Common law decisions today reflect both precedent and policy judgment drawn from economics, the social sciences, business, decisions of foreign courts, and the like. The degree to which these external factors should influence adjudication is the subject of active debate, but it is indisputable that judges do draw on experience and learning from everyday life, from other fields, and from other jurisdictions. 1870 through 20th century, and the procedural merger of law and equity As early as the 15th century, it became the practice that litigants who felt they had been cheated by the common law system would petition the King in person. For example, they might argue that an award of damages (at common law (as opposed to equity)) was not sufficient redress for a trespasser occupying their land, and instead request that the trespasser be evicted. From this developed the system of equity, administered by the Lord Chancellor, in the courts of chancery. By their nature, equity and law were frequently in conflict, and litigation would often continue for years as one court countermanded the other, even though it was established by the 17th century that equity should prevail. In England, courts of law (as opposed to equity) were combined with courts of equity by the Judicature Acts of 1873 and 1875, with equity prevailing in case of conflict. In the United States, parallel systems of law (providing money damages, with cases heard by a jury upon either party's request) and equity (fashioning a remedy to fit the situation, including injunctive relief, heard by a judge) survived well into the 20th century. 
The United States federal courts procedurally separated law and equity: the same judges could hear either kind of case, but a given case could only pursue causes in law or in equity, and the two kinds of cases proceeded under different procedural rules. This became problematic when a given case required both money damages and injunctive relief. In 1937, the new Federal Rules of Civil Procedure combined law and equity into one form of action, the "civil action". Fed.R.Civ.P. . The distinction survives to the extent that issues that were "common law (as opposed to equity)" as of 1791 (the date of adoption of the Seventh Amendment) are still subject to the right of either party to request a jury, and "equity" issues are decided by a judge. The states of Delaware, Illinois, Mississippi, South Carolina, and Tennessee continue to have divided courts of law and courts of chancery, for example, the Delaware Court of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division. Common law pleading and its abolition in the early 20th century For centuries, through to the 19th century, the common law recognized only specific forms of action, and required very careful drafting of the opening pleading (called a writ) to slot into exactly one of them: debt, detinue, covenant, special assumpsit, general assumpsit, trespass, trover, replevin, case (or trespass on the case), and ejectment. To initiate a lawsuit, a pleading had to be drafted to meet myriad technical requirements: correctly categorizing the case into the correct legal pigeonhole (pleading in the alternative was not permitted), and using specific "magic words" encrusted over the centuries. Under the old common law pleading standards, a suit by a pro se ("for oneself", without a lawyer) party was all but impossible, and there was often considerable procedural jousting at the outset of a case over minor wording issues. One of the major reforms of the late 19th century and early 20th century was the abolition of common law pleading requirements. A plaintiff can initiate a case by giving the defendant "a short and plain statement" of facts that constitute an alleged wrong. This reform moved the attention of courts from technical scrutiny of words to a more rational consideration of the facts, and opened access to justice far more broadly. Alternatives to common law systems Civil law systems—comparisons and contrasts to common law The main alternative to the common law system is the civil law system, which is used in Continental Europe, and most of Central and South America. Judicial decisions play only a minor role in shaping civil law The primary contrast between the two systems is the role of written decisions and precedent. In common law jurisdictions, nearly every case that presents a bona fide disagreement on the law is resolved in a written opinion. The legal reasoning for the decision, known as ratio decidendi, not only determines the court's judgment between the parties, but also stands as precedent for resolving future disputes. In contrast, civil law decisions typically do not include explanatory opinions, and thus no precedent flows from one decision to the next. In common law systems, a single decided case is binding common law (connotation 1) to the same extent as statute or regulation, under the principle of stare decisis. In contrast, in civil law systems, individual decisions have only advisory, not binding effect. 
In civil law systems, case law only acquires weight when a long series of cases use consistent reasoning, called jurisprudence constante. Civil law lawyers consult case law to obtain their best prediction of how a court will rule, but comparatively, civil law judges are less bound to follow it. For that reason, statutes in civil law systems are more comprehensive, detailed, and continuously updated, covering all matters capable of being brought before a court. Adversarial system vs. inquisitorial system Common law systems tend to give more weight to separation of powers between the judicial branch and the executive branch. In contrast, civil law systems are typically more tolerant of allowing individual officials to exercise both powers. One example of this contrast is the difference between the two systems in allocation of responsibility between prosecutor and adjudicator. Common law courts usually use an adversarial system, in which two sides present their cases to a neutral judge. In contrast, in civil law systems, criminal proceedings proceed under an inquisitorial system in which an examining magistrate serves two roles by developing the evidence and arguments for one side and then the other during the investigation phase. The examining magistrate then presents the dossier detailing his or her findings to the president of the bench that will adjudicate the case, if it has been decided that a trial shall be conducted. Therefore, the president of the bench's view of the case is not neutral and may be biased while conducting the trial after the reading of the dossier. Unlike common law proceedings, the president of the bench in the inquisitorial system is not merely an umpire and is entitled to directly interview the witnesses or express comments during the trial, as long as he or she does not express his or her view on the guilt of the accused. The proceeding in the inquisitorial system is conducted essentially in writing. Most of the witnesses would have given evidence in the investigation phase and such evidence will be contained in the dossier in the form of police reports. In the same way, the accused would have already put his or her case at the investigation phase but he or she will be free to change his or her evidence at trial. Whether the accused pleads guilty or not, a trial will be conducted. Unlike the adversarial system, the conviction and the sentence to be served (if any) will be decided by the trial jury together with the president of the trial bench, following their joint deliberation. In contrast, in an adversarial system, the onus of framing the case rests on the parties, and judges generally decide the case presented to them, rather than acting as active investigators, or actively reframing the issues presented. "In our adversary system, in both civil and criminal cases, in the first instance and on appeal, we follow the principle of party presentation. That is, we rely on the parties to frame the issues for decision and assign to courts the role of neutral arbiter of matters the parties present." This principle applies with force to all issues in criminal matters, and to factual issues: courts seldom engage in fact gathering on their own initiative, but decide facts on the evidence presented (even here, there are exceptions, for "legislative facts" as opposed to "adjudicative facts"). 
On the other hand, on issues of law, courts regularly raise new issues (such as matters of jurisdiction or standing), perform independent research, and reformulate the legal grounds on which to analyze the facts presented to them. The United States Supreme Court regularly decides based on issues raised only in amicus briefs from non-parties. One of the most notable such cases was Erie Railroad v. Tompkins, a 1938 case in which neither party questioned the ruling from the 1842 case Swift v. Tyson that served as the foundation for their arguments, but which led the Supreme Court to overturn Swift during its deliberations. To avoid surprising the parties, courts may invite briefing on such an issue to ensure adequate notice. However, there are limits—an appeals court may not introduce a theory that contradicts the party's own contentions. There are many exceptions in both directions. For example, most proceedings before U.S. federal and state agencies are inquisitorial in nature, at least in the initial stages (e.g., a patent examiner, a social security hearing officer, and so on), even though the law to be applied is developed through common law processes. Contrasting role of treatises and academic writings in common law and civil law systems The role of the legal academy presents a significant "cultural" difference between common law (connotation 2) and civil law jurisdictions. In both systems, treatises compile decisions and state overarching principles that (in the author's opinion) explain the results of the cases. In neither system are treatises considered "law," but the weight given them is nonetheless quite different. In common law jurisdictions, lawyers and judges tend to use these treatises as only "finding aids" to locate the relevant cases. In common law jurisdictions, scholarly work is seldom cited as authority for what the law is. Chief Justice Roberts noted the "great disconnect between the academy and the profession." When common law courts rely on scholarly work, it is almost always only for factual findings, policy justification, or the history and evolution of the law; the court's legal conclusion is reached through analysis of relevant statutes and common law, seldom through scholarly commentary. In contrast, in civil law jurisdictions, courts give the writings of law professors significant weight, partly because civil law decisions traditionally were very brief, sometimes no more than a paragraph stating who wins and who loses. The rationale had to come from somewhere else: the academy often filled that role. Narrowing of differences between common law and civil law The contrast between civil law and common law legal systems has become increasingly blurred, with the growing importance of jurisprudence (similar to case law but not binding) in civil law countries, and the growing importance of statute law and codes in common law countries. Examples of common law being replaced by statute or codified rule in the United States include criminal law (since 1812, U.S. federal courts and most but not all of the states have held that criminal law must be embodied in statute if the public is to have fair notice), commercial law (the Uniform Commercial Code in the early 1960s) and procedure (the Federal Rules of Civil Procedure in the 1930s and the Federal Rules of Evidence in the 1970s). Note, however, that in each case the statute sets the general principles, while the interstitial common law process determines the scope and application of the statute. 
An example of convergence from the other direction is shown in the 1982 decision Srl CILFIT and Lanificio di Gavardo SpA v Ministry of Health, in which the European Court of Justice held that questions it has already answered need not be resubmitted. This showed how a principle historically distinctive of the common law came to be used by a court composed of judges (at that time) from essentially civil law jurisdictions. Other alternatives The former Soviet Bloc and other socialist countries used a socialist law system, although there is controversy as to whether socialist law ever constituted a separate legal system. Much of the Muslim world uses legal systems based on Sharia (also called Islamic law). Many churches use a system of canon law. The canon law of the Catholic Church influenced the common law during the medieval period through its preservation of Roman law doctrine such as the presumption of innocence. Common law legal systems in the present day In jurisdictions around the world The common law constitutes the basis of the legal systems of: Australia (both federal and individual states), Bangladesh, Belize, Brunei, Canada (both federal and the individual provinces (except Quebec)), the Caribbean jurisdictions of Antigua and Barbuda, Barbados, Bahamas, Dominica, Grenada, Jamaica, St Vincent and the Grenadines, Saint Kitts and Nevis, Trinidad and Tobago, Ghana, Hong Kong, India, Ireland, Israel, Kenya, Nigeria, Malaysia, Myanmar, New Zealand, Pakistan, Philippines, Singapore, South Africa, United Kingdom: England and Wales, Northern Ireland, United States (both the federal system and the individual states (with the partial exception of Louisiana)), and many other generally English-speaking countries or Commonwealth countries (except the UK's Scotland, which is bijuridical, and Malta). Essentially, every country that was colonised at some time by England, Great Britain, or the United Kingdom uses common law except those that were formerly colonised by other nations, such as Quebec (which follows the bijuridical law or civil code of France in part), South Africa and Sri Lanka (which follow Roman Dutch law), where the prior civil law system was retained to respect the civil rights of the local colonists. Guyana and Saint Lucia have mixed Common Law and Civil Law systems. The remainder of this section discusses jurisdiction-specific variants, arranged chronologically. Scotland Scotland is often said to use the civil law system, but it has a unique system that combines elements of an uncodified civil law dating back to the Corpus Juris Civilis with an element of its own common law long predating the Treaty of Union with England in 1707 (see Legal institutions of Scotland in the High Middle Ages), founded on the customary laws of the tribes residing there. Historically, Scottish common law differed in that the use of precedent was subject to the courts' seeking to discover the principle that justifies a law rather than searching for an example as a precedent, and principles of natural justice and fairness have always played a role in Scots Law. From the 19th century, the Scottish approach to precedent developed into a stare decisis akin to that already established in England, thereby reflecting a narrower, more modern approach to the application of case law in subsequent instances. This is not to say that the substantive rules of the common laws of both countries are the same, but in many matters (particularly those of UK-wide interest), they are similar. 
Scotland shares the Supreme Court with England, Wales and Northern Ireland for civil cases; the court's decisions are binding on the jurisdiction from which a case arises but only influential on similar cases arising in Scotland. This has had the effect of converging the law in certain areas. For instance, the modern UK law of negligence is based on Donoghue v Stevenson, a case originating in Paisley, Scotland. Scotland maintains a separate criminal law system from the rest of the UK, with the High Court of Justiciary being the final court for criminal appeals. The highest court of appeal in civil cases brought in Scotland is now the Supreme Court of the United Kingdom (before October 2009, final appellate jurisdiction lay with the House of Lords). United States States of the United States (17th century on) The centuries-old authority of the common law courts in England to develop law case by case and to apply statute law—"legislating from the bench"—is a traditional function of courts, which was carried over into the U.S. system as an essential component of the "judicial power" specified by Article III of the U.S. constitution. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, "judges do and must legislate” (in the federal courts, only interstitially, in state courts, to the full limits of common law adjudicatory authority). New York (17th century) The original colony of New Netherland was settled by the Dutch and the law was also Dutch. When the English captured pre-existing colonies they continued to allow the local settlers to keep their civil law. However, the Dutch settlers revolted against the English and the colony was recaptured by the Dutch. In 1664, the colony of New York had two distinct legal systems: on Manhattan Island and along the Hudson River, sophisticated courts modeled on those of the Netherlands were resolving disputes learnedly in accordance with Dutch customary law. On Long Island, Staten Island, and in Westchester, on the other hand, English courts were administering a crude, untechnical variant of the common law carried from Puritan New England and practiced without the intercession of lawyers. When the English finally regained control of New Netherland they imposed common law upon all the colonists, including the Dutch. This was problematic, as the patroon system of land holding, based on the feudal system and civil law, continued to operate in the colony until it was abolished in the mid-19th century. New York began a codification of its law in the 19th century. The only part of this codification process that was considered complete is known as the Field Code applying to civil procedure. The influence of Roman-Dutch law continued in the colony well into the late 19th century. The codification of a law of general obligations shows how remnants of the civil law tradition in New York continued on from the Dutch days. Louisiana (1700s) Under Louisiana's codified system, the Louisiana Civil Code, private law—that is, substantive law between private sector parties—is based on principles of law from continental Europe, with some common law influences. These principles derive ultimately from Roman law, transmitted through French law and Spanish law, as the state's current territory intersects the area of North America colonized by Spain and by France. Contrary to popular belief, the Louisiana code does not directly derive from the Napoleonic Code, as the latter was enacted in 1804, one year after the Louisiana Purchase. 
However, the two codes are similar in many respects due to common roots. Louisiana's criminal law largely rests on English common law. Louisiana's administrative law is generally similar to the administrative law of the U.S. federal government and other U.S. states. Louisiana's procedural law is generally in line with that of other U.S. states, which in turn is generally based on the U.S. Federal Rules of Civil Procedure. Historically notable among the Louisiana code's differences from common law is the role of property rights among women, particularly in inheritance gained by widows. California (1850s) The U.S. state of California has a system based on common law, but it has codified the law in the manner of civil law jurisdictions. The reason for the enactment of the California Codes in the 19th century was to replace a pre-existing system based on Spanish civil law with a system based on common law, similar to that in most other states. California and a number of other Western states, however, have retained the concept of community property derived from civil law. The California courts have treated portions of the codes as an extension of the common-law tradition, subject to judicial development in the same manner as judge-made common law. (Most notably, in the case Li v. Yellow Cab Co., 13 Cal.3d 804 (1975), the California Supreme Court adopted the principle of comparative negligence in the face of a California Civil Code provision codifying the traditional common-law doctrine of contributory negligence.) United States federal courts (1789 and 1938) The United States federal government (as opposed to the states) has a variant on a common law system. United States federal courts only act as interpreters of statutes and the constitution by elaborating and precisely defining broad statutory language (connotation 1(b) above), but, unlike state courts, do not generally act as an independent source of common law. Before 1938, the federal courts, like almost all other common law courts, decided the law on any issue where the relevant legislature (either the U.S. Congress or state legislature, depending on the issue), had not acted, by looking to courts in the same system, that is, other federal courts, even on issues of state law, and even where there was no express grant of authority from Congress or the Constitution. In 1938, the U.S. Supreme Court in Erie Railroad Co. v. Tompkins 304 U.S. 64, 78 (1938), overruled earlier precedent, and held "There is no federal general common law," thus confining the federal courts to act only as interstitial interpreters of law originating elsewhere. E.g., Texas Industries v. Radcliff, (without an express grant of statutory authority, federal courts cannot create rules of intuitive justice, for example, a right to contribution from co-conspirators). Post-1938, federal courts deciding issues that arise under state law are required to defer to state court interpretations of state statutes, or reason what a state's highest court would rule if presented with the issue, or to certify the question to the state's highest court for resolution. Later courts have limited Erie slightly, to create a few situations where United States federal courts are permitted to create federal common law rules without express statutory authority, for example, where a federal rule of decision is necessary to protect uniquely federal interests, such as foreign affairs, or financial instruments issued by the federal government. See, e.g., Clearfield Trust Co. v. 
United States, (giving federal courts the authority to fashion common law rules with respect to issues of federal power, in this case negotiable instruments backed by the federal government); see also International News Service v. Associated Press, 248 U.S. 215 (1918) (creating a cause of action for misappropriation of "hot news" that lacks any statutory grounding); but see National Basketball Association v. Motorola, Inc., 105 F.3d 841, 843–44, 853 (2d Cir. 1997) (noting continued vitality of INS "hot news" tort under New York state law, but leaving open the question of whether it survives under federal law). Except on Constitutional issues, Congress is free to legislatively overrule federal courts' common law. United States executive branch agencies (1946) Most executive branch agencies in the United States federal government have some adjudicatory authority. To a greater or lesser extent, agencies honor their own precedent to ensure consistent results. Agency decision making is governed by the Administrative Procedure Act of 1946. For example, the National Labor Relations Board issues relatively few regulations, but instead promulgates most of its substantive rules through common law (connotation 1). India, Pakistan, and Bangladesh (19th century and 1948) The laws of India, Pakistan, and Bangladesh are largely based on English common law because of the long period of British colonial influence during the British Raj. Ancient India represented a distinct tradition of law, and had an historically independent school of legal theory and practice. The Arthashastra, dating from 400 BCE, and the Manusmriti, from 100 CE, were influential treatises in India, texts that were considered authoritative legal guidance. Manu's central philosophy was tolerance and pluralism, and was cited across Southeast Asia. Early in this period, which finally culminated in the creation of the Gupta Empire, relations with ancient Greece and Rome were not infrequent. The appearance of similar fundamental institutions of international law in various parts of the world shows that they are inherent in international society, irrespective of culture and tradition. Inter-State relations in the pre-Islamic period resulted in clear-cut rules of warfare of a high humanitarian standard, in rules of neutrality, of treaty law, of customary law embodied in religious charters, in exchange of embassies of a temporary or semi-permanent character. When India became part of the British Empire, there was a break in tradition, and Hindu and Islamic law were supplanted by the common law. After the failed rebellion against the British in 1857, the British Parliament took over control of India from the British East India Company, and British India came under the direct rule of the Crown. The British Parliament passed the Government of India Act 1858 to this effect, which set up the structure of British government in India. It established in Britain the office of the Secretary of State for India through whom the Parliament would exercise its rule, along with a Council of India to aid him. It also established the office of the Governor-General of India along with an Executive Council in India, which consisted of high officials of the British Government. As a result, the present judicial system of the country derives largely from the British system and has little correlation to the institutions of the pre-British era. Post-partition India (1948) Post-partition, India retained its common law system. 
Much of contemporary Indian law shows substantial European and American influence. Legislation first introduced by the British is still in effect in modified form today. During the drafting of the Indian Constitution, laws from Ireland, the United States, Britain, and France were all synthesized to produce a refined set of Indian laws. Indian laws also adhere to the United Nations guidelines on human rights law | and reason from those decisions by analogy. In common law jurisdictions (in the sense opposed to "civil law"), legislatures operate under the assumption that statutes will be interpreted against the backdrop of the pre-existing common law. As the United States Supreme Court explained in United States v Texas, 507 U.S. 529 (1993): Just as longstanding is the principle that "[s]tatutes which invade the common law ... are to be read with a presumption favoring the retention of long-established and familiar principles, except when a statutory purpose to the contrary is evident." Isbrandtsen Co. v. Johnson, 343 U.S. 779, 783 (1952); Astoria Federal Savings & Loan Assn. v. Solimino, 501 U.S. 104, 108 (1991). In such cases, Congress does not write upon a clean slate. Astoria, 501 U.S. at 108. In order to abrogate a common-law principle, the statute must "speak directly" to the question addressed by the common law. Mobil Oil Corp. v. Higginbotham, 436 U. S. 618, 625 (1978); Milwaukee v. Illinois, 451 U. S. 304, 315 (1981). For example, in most U.S. states, the criminal statutes are primarily codification of pre-existing common law. (Codification is the process of enacting a statute that collects and restates pre-existing law in a single document—when that pre-existing law is common law, the common law remains relevant to the interpretation of these statutes.) In reliance on this assumption, modern statutes often leave a number of terms and fine distinctions unstated—for example, a statute might be very brief, leaving the precise definition of terms unstated, under the assumption that these fine distinctions would be resolved in the future by the courts based upon what they then understand to be the pre-existing common law. (For this reason, many modern American law schools teach the common law of crime as it stood in England in 1789, because that centuries-old English common law is a necessary foundation to interpreting modern criminal statutes.) With the transition from English law, which had common law crimes, to the new legal system under the U.S. Constitution, which prohibited ex post facto laws at both the federal and state level, the question was raised whether there could be common law crimes in the United States. It was settled in the case of United States v. Hudson, which decided that federal courts had no jurisdiction to define new common law crimes, and that there must always be a (constitutional) statute defining the offense and the penalty for it. Still, many states retain selected common law crimes. For example, in Virginia, the definition of the conduct that constitutes the crime of robbery exists only in the common law, and the robbery statute only sets the punishment. Virginia Code section 1-200 establishes the continued existence and vitality of common law principles and provides that "The common law of England, insofar as it is not repugnant to the principles of the Bill of Rights and Constitution of this Commonwealth, shall continue in full force within the same, and be the rule of decision, except as altered by the General Assembly." 
In contrast to statutory codification of common law, some statutes displace common law, for example to create a new cause of action that did not exist in the common law, or to legislatively overrule the common law. An example is the tort of wrongful death, which allows certain persons, usually a spouse, child or estate, to sue for damages on behalf of the deceased. There is no such tort in English common law; thus, any jurisdiction that lacks a wrongful death statute will not allow a lawsuit for the wrongful death of a loved one. Where a wrongful death statute exists, the compensation or other remedy available is limited to the remedy specified in the statute (typically, an upper limit on the amount of damages). Courts generally interpret statutes that create new causes of action narrowly—that is, limited to their precise terms—because the courts generally recognize the legislature as being supreme in deciding the reach of judge-made law unless such a statute should violate some "second order" constitutional law provision (cf. judicial activism). This principle is applied more strongly in fields of commercial law (contracts and the like) where predictability is of relatively higher value, and less in torts, where courts recognize a greater responsibility to "do justice." Where a tort is rooted in common law, all traditionally recognized damages for that tort may be sued for, whether or not there is mention of those damages in the current statutory law. For instance, a person who sustains bodily injury through the negligence of another may sue for medical costs, pain, suffering, loss of earnings or earning capacity, mental and/or emotional distress, loss of quality of life, disfigurement and more. These damages need not be set forth in statute as they already exist in the tradition of common law. However, without a wrongful death statute, most of them are extinguished upon death. In the United States, the power of the federal judiciary to review and invalidate unconstitutional acts of the federal executive branch is stated in the Constitution, Article III sections 1 and 2: "The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. ... The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority..." The first landmark decision on "the judicial power" was Marbury v. Madison. Later cases interpreted the "judicial power" of Article III to establish the power of federal courts to consider or overturn any action of Congress or of any state that conflicts with the Constitution. The interactions between decisions of different courts are discussed further in the article on precedent. Further interactions between common law and either statute or regulation are discussed in the articles on Skidmore deference, Chevron deference, and Auer deference. Overruling precedent—the limits of stare decisis The United States federal courts are divided into twelve regional circuits, each with a circuit court of appeals (plus a thirteenth, the Court of Appeals for the Federal Circuit, which hears appeals in patent cases and cases against the federal government, without geographic limitation). Decisions of one circuit court are binding on the district courts within the circuit and on the circuit court itself, but are only persuasive authority on sister circuits. 
District court decisions are not binding precedent at all, only persuasive. Most of the U.S. federal courts of appeal have adopted a rule under which, in the event of any conflict in decisions of panels (most of the courts of appeal almost always sit in panels of three), the earlier panel decision is controlling, and a panel decision may only be overruled by the court of appeals sitting en banc (that is, all active judges of the court) or by a higher court. In these courts, the older decision remains controlling when an issue comes up the third time. Other courts, for example, the Court of Customs and Patent Appeals and the Supreme Court, always sit en banc, and thus the later decision controls. These courts essentially overrule all previous cases in each new case, and older cases survive only to the extent they do not conflict with newer cases. The interpretations of these courts—for example, Supreme Court interpretations of the constitution or federal statutes—are stable only so long as the older interpretation maintains the support of a majority of the court. Older decisions persist through some combination of belief that the old decision is right, and that it is not sufficiently wrong to be overruled. In the jurisdictions of England and Wales and of Northern Ireland, since 2009, the Supreme Court of the United Kingdom has the authority to overrule and unify criminal law decisions of lower courts; it is the final court of appeal for civil law cases in all three of the UK jurisdictions but not for criminal law cases in Scotland. From 1966 to 2009, this power lay with the House of Lords, granted by the Practice Statement of 1966. Canada's federal system, described below, avoids regional variability of federal law by giving national jurisdiction to both layers of appellate courts. Common law as a foundation for commercial economies The reliance on judicial opinion is a strength of common law systems, and is a significant contributor to the robust commercial systems in the United Kingdom and United States. Because there is reasonably precise guidance on almost every issue, parties (especially commercial parties) can predict whether a proposed course of action is likely to be lawful or unlawful, and have some assurance of consistency. As Justice Brandeis famously expressed it, "in most matters it is more important that the applicable rule of law be settled than that it be settled right." This ability to predict gives more freedom to come close to the boundaries of the law. For example, many commercial contracts are more economically efficient, and create greater wealth, because the parties know ahead of time that the proposed arrangement, though perhaps close to the line, is almost certainly legal. Newspapers, taxpayer-funded entities with some religious affiliation, and political parties can obtain fairly clear guidance on the boundaries within which their freedom of expression rights apply. In contrast, in jurisdictions with very weak respect for precedent, fine questions of law are redetermined anew each time they arise, making consistency and prediction more difficult, and procedures far more protracted than necessary because parties cannot rely on written statements of law as reliable guides. 
In jurisdictions that do not have a strong allegiance to a large body of precedent, parties have less a priori guidance (unless the written law is very clear and kept updated) and must often leave a bigger "safety margin" of unexploited opportunities, and final determinations are reached only after far larger expenditures on legal fees by the parties. This is the reason for the frequent choice of the law of the State of New York in commercial contracts, even when neither entity has extensive contacts with New York—and remarkably often even when neither party has contacts with the United States. Commercial contracts almost always include a "choice of law clause" to reduce uncertainty. Somewhat surprisingly, contracts throughout the world (for example, contracts involving parties in Japan, France and Germany, and from most of the other states of the United States) often choose the law of New York, even where the relationship of the parties and transaction to New York is quite attenuated. Because of its history as the United States' commercial center, New York common law has a depth and predictability not (yet) available in any other jurisdiction of the United States. Similarly, American corporations are often formed under Delaware corporate law, and American contracts relating to corporate law issues (mergers and acquisitions of companies, rights of shareholders, and so on) include a Delaware choice of law clause, because of the deep body of law in Delaware on these issues. On the other hand, some other jurisdictions have sufficiently developed bodies of law so that parties have no real motivation to choose the law of a foreign jurisdiction (for example, England and Wales, and the state of California), but not yet so fully developed that parties with no relationship to the jurisdiction choose that law. Outside the United States, parties that are in different jurisdictions from each other often choose the law of England and Wales, particularly when the parties are each in former British colonies and members of the Commonwealth. The common theme in all cases is that commercial parties seek predictability and simplicity in their contractual relations, and frequently choose the law of a common law jurisdiction with a well-developed body of common law to achieve that result. Likewise, for litigation of commercial disputes arising out of unpredictable torts (as opposed to the prospective choice of law clauses in contracts discussed in the previous paragraph), certain jurisdictions attract an unusually high fraction of cases, because of the predictability afforded by the depth of decided cases. For example, London is considered the pre-eminent centre for litigation of admiralty cases. This is not to say that common law is better in every situation. For example, civil law can be clearer than case law when the legislature has had the foresight and diligence to address the precise set of facts applicable to a particular situation. For that reason, civil law statutes tend to be somewhat more detailed than statutes written by common law legislatures—but, conversely, that tends to make the statute more difficult to read (the United States tax code is an example). History Origins The common law, so named because it was "common" to all the king's courts across England, originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. 
Prior to the Norman Conquest, much of England's legal business took place in the local folk courts of its various shires and hundreds. A variety of other individual courts also existed across the land: urban boroughs and merchant fairs held their own courts, as did the universities of Oxford and Cambridge, and large landholders also held their own manorial and seigniorial courts as needed. The degree to which common law drew from earlier Anglo-Saxon traditions, such as the jury, ordeals, the penalty of outlawry, and writs, all of which were incorporated into the Norman common law, is still a subject of much discussion. Additionally, the Catholic Church operated its own court system that adjudicated issues of canon law. The main sources for the history of the common law in the Middle Ages are the plea rolls and the Year Books. The plea rolls, which were the official court records for the Courts of Common Pleas and King's Bench, were written in Latin. The rolls were made up in bundles by law term: Hilary, Easter, Trinity, and Michaelmas, or winter, spring, summer, and autumn. They are currently deposited in the UK National Archives, by whose permission images of the rolls for the Courts of Common Pleas, King's Bench, and Exchequer of Pleas, from the 13th century to the 17th, can be viewed online at the Anglo-American Legal Tradition site (The O'Quinn Law Library of the University of Houston Law Center). The doctrine of precedent developed during the 12th and 13th centuries, as the collective judicial decisions that were based in tradition, custom and precedent. The form of reasoning used in common law is known as casuistry or case-based reasoning. The common law, as applied in civil cases (as distinct from criminal cases), was devised as a means of compensating someone for wrongful acts known as torts, including both intentional torts and torts caused by negligence, and as developing the body of law recognizing and regulating contracts. The type of procedure practiced in common law courts is known as the adversarial system; this is also a development of the common law. Medieval English common law In 1154, Henry II became the first Plantagenet king. Among many achievements, Henry institutionalized common law by creating a unified system of law "common" to the country through incorporating and elevating local custom to the national, ending local control and peculiarities, eliminating arbitrary remedies and reinstating a jury system—citizens sworn on oath to investigate reliable criminal accusations and civil claims. The jury reached its verdict through evaluating common local knowledge, not necessarily through the presentation of evidence, a distinguishing factor from today's civil and criminal court systems. At the time, royal government centered on the Curia Regis (king's court), the body of aristocrats and prelates who assisted in the administration of the realm and the ancestor of Parliament, the Star Chamber, and Privy Council. Henry II developed the practice of sending judges (numbering around 20 to 30 in the 1180s) from his Curia Regis to hear the various disputes throughout the country, and return to the court thereafter. The king's itinerant justices would generally receive a writ or commission under the great seal. They would then resolve disputes on an ad hoc basis according to what they interpreted the customs to be. The king's judges would then return to London and often discuss their cases and the decisions they made with the other judges. 
These decisions would be recorded and filed. In time, a rule known as stare decisis (also commonly known as precedent) developed, whereby a judge would be bound to follow the decision of an earlier judge; he was required to adopt the earlier judge's interpretation of the law and apply the same principles promulgated by that earlier judge if the two cases had similar facts to one another. Once judges began to regard each other's decisions to be binding precedent, the pre-Norman system of local customs and law varying in each locality was replaced by a system that was (at least in theory, though not always in practice) common throughout the whole country, hence the name "common law". The king's object was to preserve public order, but providing law and order was also extremely profitable–cases on forest use, as well as fines and forfeitures, could generate "great treasure" for the government. Eyres (a Norman French word for judicial circuit, originating from Latin iter) were more than just courts; they would supervise local government, raise revenue, investigate crimes, and enforce the feudal rights of the king. There were complaints that the eyre of 1198 reduced the kingdom to poverty, and that Cornishmen fled to escape the eyre of 1233. Henry II's creation of a powerful and unified court system, which curbed somewhat the power of canonical (church) courts, brought him (and England) into conflict with the church, most famously with Thomas Becket, the Archbishop of Canterbury. The murder of the Archbishop gave rise to a wave of popular outrage against the King. Henry was forced to repeal the disputed laws and to abandon his efforts to hold church members accountable for secular crimes (see also Constitutions of Clarendon). The English Court of Common Pleas was established after Magna Carta to try lawsuits between commoners in which the monarch had no interest. Its judges sat in open court in the Great Hall of the king's Palace of Westminster, permanently except in the vacations between the four terms of the legal year. Judge-made common law operated as the primary source of law for several hundred years, before Parliament acquired legislative powers to create statutory law. It is important to understand that common law is the older and more traditional source of law, and legislative power is simply a layer applied on top of the older common law foundation. Since the 12th century, courts have had parallel and co-equal authority to make law—"legislating from the bench" is a traditional and essential function of courts, which was carried over into the U.S. system as an essential component of the "judicial power" specified by Article III of the U.S. Constitution. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917: "judges do and must legislate." There are legitimate debates on how the powers of courts and legislatures should be balanced. However, the view that courts lack law-making power is historically inaccurate and constitutionally unsupportable. In England, judges have devised a number of rules as to how to deal with precedent decisions. The early development of case-law in the thirteenth century has been traced to Bracton's On the Laws and Customs of England and led to the yearly compilations of court cases known as Year Books, of which the first extant was published in 1268, the same year that Bracton died.
The Year Books are known as the law reports of medieval England, and are a principal source for knowledge of the developing legal doctrines, concepts, and methods in the period from the 13th to the 16th centuries, when the common law developed into recognizable form. Influence of Roman law The term "common law" is often used as a contrast to Roman-derived "civil law", and the fundamental processes and forms of reasoning in the two are quite different. Nonetheless, there has been considerable cross-fertilization of ideas, while the two traditions and sets of foundational principles remain distinct. By the time of the rediscovery of the Roman law in Europe in the 12th and 13th centuries, the common law had already developed far enough to prevent a Roman law reception as it occurred on the continent. However, the first common law scholars, most notably Glanvill and Bracton, as well as the early royal common law judges, had been well accustomed with Roman law. Often, they were clerics trained in the Roman canon law. One of the first and throughout its history one of the most significant treatises of the common law, Bracton's De Legibus et Consuetudinibus Angliae (On the Laws and Customs of England), was heavily influenced by the division of the law in Justinian's Institutes. The impact of Roman law had decreased sharply after the age of Bracton, but the Roman divisions of actions into in rem (typically, actions against a thing or property for the purpose of gaining title to that property; must be filed in a court where the property is located) and in personam (typically, actions directed against a person; these can affect a person's rights and, since a person often owns things, his property too) used by Bracton had a lasting effect and laid the groundwork for a return of Roman law structural concepts in the 18th and 19th centuries. Signs of this can be found in Blackstone's Commentaries on the Laws of England, and Roman law ideas regained importance with the revival of academic law schools in the 19th century. As a result, today, the main systematic divisions of the law into property, contract, and tort (and to some extent unjust enrichment) can be found in the civil law as well as in the common law. Coke and Blackstone The first attempt at a comprehensive compilation of centuries of common law was by Lord Chief Justice Edward Coke, in his treatise, Institutes of the Lawes of England in the 17th century. The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769. Propagation of the common law to the colonies and Commonwealth by reception statutes A reception statute is a statutory law adopted as a former British colony becomes independent, by which the new nation adopts (i.e. receives) pre-independence common law, to the extent not explicitly rejected by the legislative body or constitution of the new nation. Reception statutes generally consider the English common law dating prior to independence, and the precedent originating from it, as the default law, because of the importance of using an extensive and predictable body of law to govern the conduct of citizens and businesses in a new state. All U.S. states, with the partial exception of Louisiana, have either implemented reception statutes or adopted the common law by judicial opinion. 
Other examples of reception statutes in the United States, the states of the U.S., Canada and its provinces, and Hong Kong, are discussed in the reception statute article. Yet, adoption of the common law in the newly independent nation was not a foregone conclusion, and was controversial. Immediately after the American Revolution, there was widespread distrust and hostility to anything British, and the common law was no exception. Jeffersonians decried lawyers and their common law tradition as threats to the new republic. The Jeffersonians preferred a legislatively enacted civil law under the control of the political process, rather than the common law developed by judges who—by design—were insulated from the political process. The Federalists believed that the common law was the birthright of Independence: after all, the natural rights to "life, liberty, and the pursuit of happiness" were the rights protected by common law. Even advocates for the common law approach noted that it was not an ideal fit for the newly independent colonies: judges and lawyers alike were severely hindered by a lack of printed legal materials. Before Independence, the most comprehensive law libraries had been maintained by Tory lawyers, and those libraries vanished with the loyalist expatriation, and the ability to print books was limited. Lawyer (later President) John Adams complained that he "suffered very much for the want of books". To bootstrap this most basic need of a common law system—knowable, written law—in 1803, lawyers in Massachusetts donated their books to found a law library. A Jeffersonian newspaper criticized the library, as it would carry forward "all the old authorities practiced in England for centuries back ... whereby a new system of jurisprudence [will be founded] on the high monarchical system [to] become the Common Law of this Commonwealth... [The library] may hereafter have a very unsocial purpose." For several decades after independence, English law still exerted influence over American common law—for example, with Byrne v Boadle (1863), which first applied the res ipsa loquitur doctrine. Decline of Latin maxims and "blind imitation of the past", and adding flexibility to stare decisis Well into the 19th century, ancient maxims played a large role in common law adjudication. Many of these maxims had originated in Roman Law, migrated to England before the introduction of Christianity to the British Isles, and were typically stated in Latin even in English decisions. Many examples are familiar in everyday speech even today: "One cannot be a judge in one's own cause" (see Dr. Bonham's Case), rights are reciprocal to obligations, and the like. Judicial decisions and treatises of the 17th and 18th centuries, such as those of Lord Chief Justice Edward Coke, presented the common law as a collection of such maxims. Reliance on old maxims and rigid adherence to precedent, no matter how old or ill-considered, came under critical discussion in the late 19th century, starting in the United States. Oliver Wendell Holmes Jr. in his famous article, "The Path of the Law", commented, "It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past."
Justice Holmes noted that study of maxims might be sufficient for "the man of the present", but "the man of the future is the man of statistics and the master of economics". In an 1880 lecture at Harvard, he wrote: The life of the law has not been logic; it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation's development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics. In the early 20th century, Louis Brandeis, later appointed to the United States Supreme Court, became noted for his use of policy-driving facts and economics in his briefs, and extensive appendices presenting facts that lead a judge to the advocate's conclusion. By this time, briefs relied more on facts than on Latin maxims. Reliance on old maxims is now deprecated. Common law decisions today reflect both precedent and policy judgment drawn from economics, the social sciences, business, decisions of foreign courts, and the like. The degree to which these external factors should influence adjudication is the subject of active debate, but it is indisputable that judges do draw on experience and learning from everyday life, from other fields, and from other jurisdictions. 1870 through 20th century, and the procedural merger of law and equity As early as the 15th century, it became the practice that litigants who felt they had been cheated by the common law system would petition the King in person. For example, they might argue that an award of damages (at common law (as opposed to equity)) was not sufficient redress for a trespasser occupying their land, and instead request that the trespasser be evicted. From this developed the system of equity, administered by the Lord Chancellor, in the courts of chancery. By their nature, equity and law were frequently in conflict and litigation would frequently continue for years as one court countermanded the other, even though it was established by the 17th century that equity should prevail. In England, courts of law (as opposed to equity) were combined with courts of equity by the Judicature Acts of 1873 and 1875, with equity prevailing in case of conflict. In the United States, parallel systems of law (providing money damages, with cases heard by a jury upon either party's request) and equity (fashioning a remedy to fit the situation, including injunctive relief, heard by a judge) survived well into the 20th century. The United States federal courts procedurally separated law and equity: the same judges could hear either kind of case, but a given case could only pursue causes in law or in equity, and the two kinds of cases proceeded under different procedural rules. This became problematic when a given case required both money damages and injunctive relief. In 1937, the new Federal Rules of Civil Procedure combined law and equity into one form of action, the "civil action". Fed.R.Civ.P. . The distinction survives to the extent that issues that were "common law (as opposed to equity)" as of 1791 (the date of adoption of the Seventh Amendment) are still subject to the right of either party to request a jury, and "equity" issues are decided by a judge. 
The states of Delaware, Illinois, Mississippi, South Carolina, and Tennessee continue to have divided courts of law and courts of chancery, for example, the Delaware Court of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division. Common law pleading and its abolition in the early 20th century For centuries, through to the 19th century, the common law recognized only specific forms of action, and required very careful drafting of the opening pleading (called a writ) to slot into exactly one of them: debt, detinue, covenant, special assumpsit, general assumpsit, trespass, trover, replevin, case (or trespass on the case), and ejectment. To initiate a lawsuit, a pleading had to be drafted to meet myriad technical requirements: correctly categorizing the case into the correct legal pigeonhole (pleading in the alternative was not permitted), and using specific "magic words" encrusted over the centuries. Under the old common law pleading standards, a suit by a pro se ("for oneself", without a lawyer) party was all but impossible, and there was often considerable procedural jousting at the outset of a case over minor wording issues. One of the major reforms of the late 19th century and early 20th century was the abolition of common law pleading requirements. A plaintiff can initiate a case by giving the defendant "a short and plain statement" of facts that constitute an alleged wrong. This reform moved the attention of courts from technical scrutiny of words to a more rational consideration of the facts, and opened access to justice far more broadly. Alternatives to common law systems Civil law systems—comparisons and contrasts to common law The main alternative to the common law system is the civil law system, which is used in Continental Europe, and most of Central and South America. Judicial decisions play only a minor role in shaping civil law The primary contrast between the two systems is the role of written decisions and precedent. In common law jurisdictions, nearly every case that presents a bona fide disagreement on the law is resolved in a written opinion. The legal reasoning for the decision, known as ratio decidendi, not only determines the court's judgment between the parties, but also stands as precedent for resolving future disputes. In contrast, civil law decisions typically do not include explanatory opinions, and thus no precedent flows from one decision to the next. In common law systems, a single decided case is binding common law (connotation 1) to the same extent as statute or regulation, under the principle of stare decisis. In contrast, in civil law systems, individual decisions have only advisory, not binding effect. In civil law systems, case law only acquires weight when a long series of cases use consistent reasoning, called jurisprudence constante. Civil law lawyers consult case law to obtain their best prediction of how a court will rule, but comparatively, civil law judges are less bound to follow it. For that reason, statutes in civil law systems are more comprehensive, detailed, and continuously updated, covering all matters capable of being brought before a court. Adversarial system vs. inquisitorial system Common law systems tend to give more weight to separation of powers between the judicial branch and the executive branch. In contrast, civil law systems are typically more tolerant of allowing individual officials to exercise both powers. 
One example of this contrast is the difference between the two systems in allocation of responsibility between prosecutor and adjudicator. Common law courts usually use an adversarial system, in which two sides present their cases to a neutral judge. In contrast, in civil law systems, criminal proceedings proceed under an inquisitorial system in which an examining magistrate serves two roles by developing the evidence and arguments for one side and then the other during the investigation phase. The examining magistrate then presents the dossier detailing his or her findings to the president of the bench that will adjudicate on the case where it has been decided that a trial shall be conducted. Therefore, the president of the bench's view of the case is not neutral and may be biased while conducting the trial after the reading of the dossier. Unlike the common law proceedings, the president of the bench in the inquisitorial system is not merely an umpire and is entitled to directly interview the witnesses or express comments during the trial, as long as he or she does not express his or her view on the guilt of the accused. The proceeding in the inquisitorial system is essentially by writing. Most of the witnesses would have given evidence in the investigation phase and such evidence will be contained in the dossier under the form of police reports. In the same way, the accused would have already put his or her case at the investigation phase but he or she will be free to change his or her evidence at trial. Whether the accused pleads guilty or not, a trial will be conducted. Unlike the adversarial system, the conviction and sentence to be served (if any) will be released by the trial jury together with the president of the trial bench, following their common deliberation. In contrast, in an adversarial system, the onus of framing the case rests on the parties, and judges generally decide the case presented to them, rather than acting as active investigators, or actively reframing the issues presented. "In our adversary system, in both civil and criminal cases, in the first instance and on appeal, we follow the principle of party presentation. That is, we rely on the parties to frame the issues for decision and assign to courts the role of neutral arbiter of matters the parties present." This principle applies with force in all issues in criminal matters, and to factual issues: courts seldom engage in fact gathering on their own initiative, but decide facts on the evidence presented (even here, there are exceptions, for "legislative facts" as opposed to "adjudicative facts"). On the other hand, on issues of law, courts regularly raise new issues (such as matters of jurisdiction or standing), perform independent research, and reformulate the legal grounds on which to analyze the facts presented to them. The United States Supreme Court regularly decides based on issues raised only in amicus briefs from non-parties. One of the most notable such cases was Erie Railroad v. Tompkins, a 1938 case in which neither party questioned the ruling from the 1842 case Swift v. Tyson that served as the foundation for their arguments, but which led the Supreme Court to overturn Swift during their deliberations. To avoid lack of notice, courts may invite briefing on an issue to ensure adequate notice. However, there are limits—an appeals court may not introduce a theory that contradicts the party's own contentions. There are many exceptions in both directions. For example, most proceedings before U.S. 
federal and state agencies are inquisitorial in nature, at least the initial stages (e.g., a patent examiner, a social security hearing officer, and so on), even though the law to be applied is developed through common law processes. Contrasting role of treatises and academic writings in common law and civil law systems The role of the legal academy presents a significant "cultural" difference between common law (connotation 2) and civil law jurisdictions. In both systems, treatises compile decisions and state overarching principles that (in the author's opinion) explain the results of the cases. In neither system are treatises considered "law," but the weight given them is nonetheless quite different. In common law jurisdictions, lawyers and judges tend to use these treatises as only "finding aids" to locate the relevant cases. In common law jurisdictions, scholarly work is seldom cited as authority for what the law is. Chief Justice Roberts noted the "great disconnect between the academy and the profession." When common law courts rely on scholarly work, it is almost always only for factual findings, policy justification, or the history and evolution of the law, but the court's legal conclusion is reached through analysis of relevant statutes and common law, seldom scholarly commentary. In contrast, in civil law jurisdictions, courts give the writings of law professors significant weight, partly because civil law decisions traditionally were very brief, sometimes no more than a paragraph stating who wins and who loses. The rationale had to come from somewhere else: the academy often filled that role. Narrowing of differences between common law and civil law The contrast between civil law and common law legal systems has become increasingly blurred, with the growing importance of jurisprudence (similar to case law but not binding) in civil law countries, and the growing importance of statute law and codes in common law countries. Examples of common law being replaced by statute or codified rule in the United States include criminal law (since 1812, U.S. federal courts and most but not all of the states have held that criminal law must be embodied in statute if the public is to have fair notice), commercial law (the Uniform Commercial Code in the early 1960s) and procedure (the Federal Rules of Civil Procedure in the 1930s and the Federal Rules of Evidence in the 1970s). But note that in each case, the statute sets the general principles, but the interstitial common law process determines the scope and application of the statute. An example of convergence from the other direction is shown in the 1982 decision Srl CILFIT and Lanificio di Gavardo SpA v Ministry of Health (), in which the European Court of Justice held that questions it has already answered need not be resubmitted. This showed how a historically distinctly common law principle is used by a court composed of judges (at that time) of essentially civil law jurisdiction. Other alternatives The former Soviet Bloc and other socialist countries used a socialist law system, although there is controversy as to whether socialist law ever constituted a separate legal system or not. Much of the Muslim world uses legal systems based on Sharia (also called Islamic law). Many churches use a system of canon law. The canon law of the Catholic Church influenced the common law during the medieval period through its preservation of Roman law doctrine such as the presumption of innocence. 
Common law legal systems in the present day In jurisdictions around the world The common law constitutes the basis of the legal systems of: Australia (both federal and individual states), Bangladesh, Belize, Brunei, Canada (both federal and the individual provinces (except Quebec)), the Caribbean jurisdictions of Antigua and Barbuda, Barbados, Bahamas, Dominica, Grenada, Jamaica, St Vincent and the Grenadines, Saint Kitts and Nevis, Trinidad and Tobago, Ghana, Hong Kong, India, Ireland, Israel, Kenya, Nigeria, Malaysia, Myanmar, New Zealand, Pakistan, Philippines, Singapore, South Africa, United Kingdom: England and Wales, Northern Ireland, United States (both the federal system and the individual states (with the partial exception of Louisiana)), and many other generally English-speaking countries or Commonwealth countries (except the UK's Scotland, which is bijuridical, and Malta). Essentially, every country that was colonised at some time by England, Great Britain, or the United Kingdom uses common law except those that were formerly colonised by other nations, such as Quebec (which follows the bijuridical law or civil code of France in part), South Africa and Sri Lanka (which follow Roman Dutch law), where the prior civil law system was retained to respect the civil rights of the local colonists.
United States Court of Appeals for the Fourth Circuit United States Court of Appeals for the Fifth Circuit United States Court of Appeals for the Sixth Circuit United States Court of Appeals for the Seventh Circuit United States Court of Appeals for the Eighth Circuit United States Court of Appeals for the Ninth Circuit United States Court of Appeals for the Tenth Circuit United States Court of Appeals for the Eleventh Circuit Temporary Emergency Court of Appeals (defunct) Alabama Court of Appeals (which existed until 1969) Alaska Court of Appeals Arizona Court of Appeals Arkansas Court of Appeals Colorado Court of Appeals District of Columbia Court of Appeals Georgia Court of Appeals Hawaii Intermediate Court of Appeals Idaho Court of Appeals Illinois Court of Appeals Indiana Court of Appeals Iowa Court of Appeals Kansas Court of Appeals Kentucky Court of Appeals Louisiana Court of Appeals Maryland Court of Appeals Michigan Court of Appeals Minnesota Court of Appeals Mississippi Court of Appeals Missouri Court of Appeals Nebraska Court of Appeals New Mexico Court of Appeals New York Court of Appeals North Carolina Court of Appeals North Dakota Court of Appeals Ohio Seventh District Court of Appeals Oregon Court of Appeals South Carolina Court of Appeals Tennessee Court of Appeals Texas Courts of Appeals First Court of Appeals of Texas Second Court of Appeals of Texas Third Court of Appeals of Texas Fourth Court of Appeals of Texas Fifth Court of Appeals of Texas Sixth Court of Appeals of Texas Seventh Court of Appeals of Texas Eighth Court of Appeals of Texas Ninth Court of Appeals of Texas Tenth Court of Appeals of Texas Eleventh Court of Appeals of Texas Twelfth Court of Appeals of Texas Thirteenth Court of Appeals of Texas Fourteenth Court of Appeals of Texas Utah Court of Appeals Court of Appeals of Virginia Washington Court of Appeals Supreme Court of Appeals of West Virginia Wisconsin Court of Appeals See also Court of Appeal (disambiguation) Court of Criminal Appeals (disambiguation) Court of Criminal Appeal (disambiguation) Appeal
of Species, were that it was probable that there was only one progenitor for all life forms: Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed. But he precedes that remark by, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." And in the subsequent edition, he asserts rather, "We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification". Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent." In 2008, biologist T. Ryan Gregory noted that: No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science. Evidence Common biochemistry All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Common genetic code The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent. The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer. Selectively neutral similarities Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. 
Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry. Other similarities Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities resulted because of the laws of physics and chemistry, rather than through universal common descent, and therefore resulted in convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also.
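As a small illustration of the codon-redundancy argument above, the following Python sketch uses toy, invented sequences (not data from any cited study) to show why identical choices among synonymous codons across species are hard to explain without a shared ancestor. The codon set for leucine comes from the standard genetic code; the alignment and the chance model (uniform choice among six codons, ignoring codon-usage bias) are simplifying assumptions.

```python
from math import prod

# Leucine is encoded by six synonymous codons in the standard genetic code.
LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}

# Hypothetical aligned codons at ten leucine positions in two species.
species_a = ["CTG", "CTC", "TTA", "CTG", "CTG", "CTT", "CTG", "CTA", "CTG", "CTC"]
species_b = ["CTG", "CTC", "TTA", "CTG", "CTG", "CTT", "CTG", "CTA", "CTG", "CTC"]

matches = sum(a == b for a, b in zip(species_a, species_b))
print(f"Identical synonymous codon choices: {matches} of {len(species_a)}")

# Under independent origins, each position would agree only by chance:
# roughly 1 in 6 for a six-fold degenerate amino acid (codon bias ignored).
p_chance = prod([1 / len(LEUCINE_CODONS)] * matches)
print(f"Probability of that much agreement by chance: about {p_chance:.1e}")
```

Real analyses are far more careful (they model mutation rates and codon-usage bias), but the underlying point is the one made in the text: long runs of matching synonymous codons are far easier to explain by inheritance from a common ancestor than by chance.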
Phylogenetic trees Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapped onto phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life." Traditionally, these trees have been built using morphological methods, comparing features such as external appearance and embryology. More recently, it has become possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying phylogeny.
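To make the molecular-data approach concrete, here is a minimal, hypothetical sketch of tree-building from sequence differences. The four short "species" sequences are invented for illustration, the distance used is a simple proportion of differing sites, and the clustering is SciPy's average-linkage method (a UPGMA-style approach); real phylogenetic studies use far longer alignments and more sophisticated models.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy aligned DNA sequences for four hypothetical species.
seqs = {
    "species_A": "ACGTACGTACGTACGT",
    "species_B": "ACGTACGAACGTACGT",
    "species_C": "ACGTTCGAACGAACGT",
    "species_D": "TCGTTCGAACGAACTT",
}
names = list(seqs)
n = len(names)

# Pairwise distance = fraction of aligned sites that differ (p-distance).
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = seqs[names[i]], seqs[names[j]]
        d = sum(x != y for x, y in zip(a, b)) / len(a)
        dist[i, j] = dist[j, i] = d

# Average-linkage (UPGMA-like) clustering on the condensed distance matrix.
tree = linkage(squareform(dist), method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order of the tree
```

The point the text makes is that trees built this way from different genes, and trees built from morphology, tend to agree in their branching order, which is what is meant above by evidence of a real underlying phylogeny.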
Northwestern Europe. It refers to both orally-transmitted traditional music and recorded music and the styles vary considerably to include everything from "trad" (traditional) music to a wide range of hybrids. Description and definition Celtic music means two things mainly. First, it is the music of the people that identify themselves as Celts. Secondly, it refers to whatever qualities may be unique to the music of the Celtic nations. Many notable Celtic musicians such as Alan Stivell and Paddy Moloney claim that the different Celtic music genres have a lot in common. These following melodic practices may be used widely across the different variants of Celtic Music: It is common for the melodic line to move up and down the primary chords in many Celtic songs. There are a number of possible reasons for this: Melodic variation can be easily introduced. Melodic variation is widely used in Celtic music, especially by the pipes and harp. It is easier to anticipate the direction that the melody will take, so that harmony either composed or improvised can be introduced: clichéd cadences that are essential for impromptu harmony are also more easily formed. The relatively wider tonal intervals in some songs make it possible for stress accents within the poetic line to be more in keeping with the local Celtic accent. Across just one Celtic group. By more than one Celtic language population belonging to different Celtic groups. These two latter usage patterns may simply be remnants of formerly widespread melodic practices. Often, the term Celtic music is applied to the music of Ireland and Scotland because both lands have produced well-known distinctive styles which actually have genuine commonality and clear mutual influences. The definition is further complicated by the fact that Irish independence has allowed Ireland to promote 'Celtic' music as a specifically Irish product. However, these are modern geographical references to a people who share a common Celtic ancestry and consequently, a common musical heritage. These styles are known because of the importance of Irish and Scottish people in the English speaking world, especially in the United States, where they had a profound impact on American music, particularly bluegrass and country music. The music of Wales, Cornwall, the Isle of Man, Brittany, Galician traditional music (Spain) and music of Portugal are also considered Celtic music, the tradition being particularly strong in Brittany, where Celtic festivals large and small take place throughout the year, and in Wales, where the ancient eisteddfod tradition has been revived and flourishes. Additionally, the musics of ethnically Celtic peoples abroad are vibrant, especially in Canada and the United States. In Canada the provinces of Atlantic Canada are known for being a home of Celtic music, most notably on the islands of Newfoundland, Cape Breton and Prince Edward Island. The traditional music of Atlantic Canada is heavily influenced by the Irish, Scottish and Acadian ethnic makeup of much of the region's communities. In some parts of Atlantic Canada, such as Newfoundland, Celtic music is as or more popular than in the old country. Further, some older forms of Celtic music that are rare in Scotland and Ireland today, such as the practice of accompanying a fiddle with a piano, or the Gaelic spinning songs of Cape Breton remain common in the Maritimes. 
Much of the music of this region is Celtic in nature, but originates in the local area and celebrates the sea, seafaring, fishing and other primary industries. Divisions In Celtic Music: A Complete Guide, June Skinner Sawyers acknowledges six Celtic nationalities divided into two groups according to their linguistic heritage. The Q-Celtic nationalities are the Irish, Scottish and Manx peoples, while the P-Celtic groups are the Cornish, Bretons and Welsh peoples. Musician Alan Stivell uses a similar dichotomy, between the Gaelic (Irish/Scottish/Manx) and the Brythonic (Breton/Welsh/Cornish) branches, which differentiate "mostly by the extended range (sometimes more than two octaves) of Irish and Scottish melodies and the closed range of Breton and Welsh melodies (often reduced to a half-octave), and by the frequent use of the pure pentatonic scale in Gaelic music." There is also tremendous variation between Celtic regions. Ireland, Scotland, Wales, Cornwall, and Brittany have living traditions of language and music, and there has been a recent major revival of interest in Celtic heritage in the Isle of Man. Galicia has a Celtic language revival movement to revive the Q-Celtic Gallaic language used into Roman times. Most of the Iberian Peninsula had a similar Celtic language in pre-Roman times. A Brythonic language was used in parts of Galicia and Asturias into early Medieval times brought by Britons fleeing the Anglo-Saxon invasions via Brittany. The Romance language currently spoken in Galicia, Galician (Galego) is closely related to the Portuguese language used mainly in Brazil and Portugal. Galician music is claimed to be Celtic. The same is true of the music of Asturias, Cantabria, and that of Northern Portugal (some say even traditional music from Central Portugal can be labeled Celtic). Breton artist Alan Stivell | de Música Celta de Collado Villalba (Collado Villalba, Spain) Yn Chruinnaght (Isle of Man) Celtic Connections (Glasgow, Scotland) Hebridean Celtic Festival (Stornoway, Scotland) Fleadh ceol na hÉireann (Tullamore, Ireland) Festival Intercéltico de Sendim (Sendim, Portugal) Galaicofolia (Esposende, Portugal) Festival Folk Celta Ponte da Barca (Ponte da Barca, Portugal) Douro Celtic Fest (Vila Nova de Gaia, Portugal) Festival Interceltique de Lorient (Lorient, France) Festival del Kan ar Bobl (Lorient, France) Festival de Cornouaille (Quimper, France) Les Nuits Celtiques du Stade de France (Paris, France) Montelago Celtic Night (Colfiorito, Macerata, Italy) Triskell International Celtic Festival (Trieste, Italy) Festival celtique de Québec or Québec city celtic festival, (Quebec city, Quebec, Canada) Festival Mémoire et Racines (Joliette, Quebec, Canada) Celtic Colours (Cape Breton, Nova Scotia, Canada) Paganfest (Tour through Europe) Celtic fusion The oldest musical tradition which fits under the label of Celtic fusion originated in the rural American south in the early colonial period and incorporated English, Scottish, Irish, Welsh, German, and African influences. Variously referred to as roots music, American folk music, or old-time music, this tradition has exerted a strong influence on all forms of American music, including country, blues, and rock and roll. In addition to its lasting effects on other genres, it marked the first modern large-scale mixing of musical traditions from multiple ethnic and religious communities within the Celtic diaspora. 
In the 1960s several bands put forward modern adaptations of Celtic music pulling influences from several of the Celtic nations at once to create a modern pan-celtic sound. A few of those include bagadoù (Breton pipe bands), Fairport Convention, Pentangle, Steeleye Span and Horslips. In the 1970s Clannad made their mark initially in the folk and traditional scene, and then subsequently went on to bridge the gap between traditional Celtic and pop music in the 1980s and 1990s, incorporating elements from new-age, smooth jazz, and folk rock. Traces of Clannad's legacy can be heard in the music of many artists, including Enya, Donna Taggart, Altan, Capercaillie, The Corrs, Loreena McKennitt, Anúna, Riverdance and U2. The solo music of Clannad's lead singer, Moya Brennan (often referred to as the First Lady of Celtic Music) has further enhanced this influence. Later, beginning in 1982 with The Pogues' invention of Celtic folk-punk and Stockton's Wing blend of Irish traditional and Pop, Rock and Reggae, there has been a movement to incorporate Celtic influences into other genres of music. Bands like Flogging Molly, Black 47, Dropkick Murphys, The Young Dubliners, The Tossers introduced a hybrid of Celtic rock, punk, reggae, hardcore and other elements in the 1990s that has become popular with Irish-American youth. Today there are Celtic-influenced subgenres of virtually every type of popular music including electronica, rock, metal, punk, hip hop, reggae, new-age, Latin, Andean and pop. Collectively these modern interpretations of Celtic music are sometimes referred to as Celtic fusion. Other modern adaptations Outside of America, the first deliberate attempts to create a "Pan-Celtic music" were made by the Breton Taldir Jaffrennou, having translated songs from Ireland, Scotland, and Wales into Breton between the two world wars. One of his major works was to bring "Hen Wlad Fy Nhadau" (the Welsh national anthem) back in Brittany and create lyrics in Breton. Eventually this song became "Bro goz va zadoù" ("Old land of my fathers") and is the most widely accepted Breton anthem. In the 70s, the Breton Alan Cochevelou (future Alan Stivell) began playing a mixed repertoire from the main Celtic countries on the Celtic harp his father created. Probably the most successful all-inclusive Celtic music composition in recent years is Shaun Daveys composition 'The Pilgrim'. This suite depicts the journey of St. Colum Cille through the Celtic nations of Ireland, Scotland, the Isle of Man, Wales, Cornwall, Brittany and Galicia. The suite which includes a Scottish pipe band, Irish and Welsh harpists, Galician gaitas, Irish uilleann pipes, the bombardes of Brittany, two vocal soloists and a narrator is set against a background of a classical orchestra and a large choir. Modern music may also be termed "Celtic" because it is written and recorded in a Celtic language, regardless of musical style. Many of the Celtic languages have experienced resurgences in modern years, spurred on partly by the action of artists and musicians who have embraced them as hallmarks of identity and distinctness. In 1971, the Irish band Skara Brae recorded its only LP (simply called Skara Brae), all songs in Irish. In 1978 Runrig recorded an album in Scottish Gaelic. In 1992 Capercaillie recorded "A Prince Among Islands", the first Scottish Gaelic language record to reach the UK top 40. 
In 1996, a song in Breton represented France in the 41st Eurovision Song Contest, the first time in history that France had a song without a word in French. Since about 2005, Oi Polloi (from Scotland) have recorded in Scottish Gaelic. Mill a h-Uile Rud (a Scottish Gaelic punk band from Seattle) recorded in the language in 2004. Several contemporary bands have Welsh language songs, such as Ceredwen, which fuses traditional instruments with trip hop beats, the Super Furry Animals, Fernhill, and so on (see the Music of Wales article for more Welsh and Welsh-language bands). The same phenomenon occurs in Brittany, where many singers record songs in Breton, traditional or modern (hip hop, rap, and so on). See also Folk music of Ireland Music of Brittany Music of Cornwall Galician traditional music Music of the Isle of Man Music of Scotland Music of Wales Music of Portugal Traditional Gaelic music
survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century. In the Ptolemaic Kingdom, native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were put in place during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy. Ancient China Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently. Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps prepared during this period were drawn on more scientific lines, they were considered more reliable. A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it was drawn accurately from observations and shows the supernova of 1054 in Taurus. Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplementary to old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy.
Early modern astronomy Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca. Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they now follow officially accepted designated lines of Right Ascension and Declination based on those defined by Benjamin Gould in epoch 1875.0 in his star catalogue Uranometria Argentina. The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations. Origin of the southern constellations The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC. The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci. Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Seventeen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille appearing in his star catalogue, published in 1756. Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco. 
88 modern constellations A general list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who named fourteen constellations and renamed a fifteenth one. De Lacaille studied the stars of the southern hemisphere from 1750 until 1754 from Cape of Good Hope, when he was said to have observed more than 10,000 stars using a refracting telescope. In 1922, Henry Norris Russell produced a general list of 88 constellations and some useful abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern. The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come. Symbols The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published. Dark cloud constellations The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars. 
See also Celestial cartography Constellation family Former constellations IAU designated constellations Lists of stars by constellation Constellations listed by Johannes Hevelius Constellations listed by Lacaille Constellations listed by Petrus Plancius Constellations listed by Ptolemy | employ such a distinction. E.g., the Pleiades and the Hyades are both asterisms, and each lies within the boundaries of the constellation of Taurus. Another example is the northern asterism popularly known as the Big Dipper (US) or the Plough (UK), composed of the seven brightest stars within the area of the IAU-defined constellation of Ursa Major. The southern False Cross asterism includes portions of the constellations Carina and Vela and the Summer Triangle is composed of the brightest stars in the constellations Lyra, Aquila and Cygnus. A constellation (or star), viewed from a particular latitude on Earth, that never sets below the horizon is termed circumpolar. From the North Pole or South Pole, all constellations south or north of the celestial equator are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac ranging between 23½° north, the celestial equator, and 23½° south. Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable.
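As the next passage notes, such long-term changes are predicted from each star's measured proper motion; a rough, small-angle sketch of that drift arithmetic, using proper-motion values loosely modelled on a fast-moving star such as Barnard's Star:

```python
import math

# Rough sketch of how a star's position drifts under proper motion.
# Proper motions are in milliarcseconds per year; this linear, small-angle
# treatment is only indicative over long spans (a real calculation would also
# use radial velocity and full spherical geometry).
def drift(ra_deg, dec_deg, pm_ra_cosdec_mas, pm_dec_mas, years):
    mas_to_deg = 1.0 / 3.6e6
    new_dec = dec_deg + pm_dec_mas * years * mas_to_deg
    new_ra = ra_deg + pm_ra_cosdec_mas * years * mas_to_deg / math.cos(math.radians(dec_deg))
    return new_ra, new_dec

# Values loosely based on a fast-moving star (roughly Barnard's Star),
# propagated 10,000 years into the future:
ra, dec = drift(269.45, 4.69, pm_ra_cosdec_mas=-800.0, pm_dec_mas=10300.0, years=10_000)
print(f"approximate future position: RA {ra:.2f} deg, Dec {dec:.2f} deg")
```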
Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions or cpm by accurate astrometry and their radial velocities by astronomical spectroscopy. Identification Both the 88 IAU recognized constellations and those that cultures have recognized throughout history are essentially imagined figures and shapes with only a certain basis in the actually observable sky. Many officially recognized constellations are based in the imaginations of ancient, Near Eastern and Mediterranean mythologies, but the physical reality of the Earth's position in the Milky Way still produces shapes that are connected by the human mind. For instance, Orion's Belt forms a more or less visually perfect line. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological, artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given. History of the early constellations Lascaux Caves Southern France It has been suggested that the 17,000-year-old cave paintings in Lascaux Southern France depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not yet generally accepted among scientists. Mesopotamia Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Mesopotamian constellations appeared later in many of the classical Greek constellations. Ancient Near East The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age. The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names. Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four-quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, but ‘Ayish "the bier" actually corresponding to Ursa Major. The term Mazzaroth , translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations. Classical antiquity There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. 
The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century. In the Ptolemaic Kingdom, the native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most of its figures were set down during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac to show all the now-familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy. Ancient China Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are among the most important features of the traditional Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently. Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty it became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). Because maps of this period were prepared along more scientific lines, they are considered more reliable. A well-known map from the Song period is the Suzhou Astronomical Chart, a planisphere of the Chinese sky carved on a stone plate; it was drawn accurately from observations and shows the supernova of 1054 in Taurus. Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations. Newly observed stars were incorporated as supplements to the old constellations, covering the far southern sky that ancient Chinese astronomers had never recorded. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern sky, based on knowledge of Western star charts; with this improvement, Chinese star mapping was integrated with world astronomy.
Character (Julia Kent album), 2013 Character (Rachael Sage album), 2020 Characters (Stevie Wonder album), 1987 Types of entity Character (arts), an agent within a work of art, including literature, drama, cinema, opera, etc. Character sketch or character, a literary description of a character type Game character (disambiguation), various types of characters in a video game or role playing game Player character, as above but who is controlled or whose actions are directly chosen by a player Non-player character, as above but not player-controlled, frequently abbreviated as NPC Other uses | (biology), the abstraction of an observable physical or biochemical trait of an organism Mathematics Character (mathematics), a homomorphism from a group to a field Characterization (mathematics), the logical equivalency between objects of two different domains. Character theory, the mathematical theory of special kinds of characters associated to group representations Dirichlet character, a type of character in number theory Multiplicative character, a homomorphism from a group to the multiplicative subgroup of a field Morality and social science Character education, a US term for values education Character structure, a person's traits Moral character, an evaluation of a particular individual's durable moral qualities Symbols Character (symbol), a sign or symbol Character (computing), a unit of information roughly corresponding to a grapheme Chinese characters, a written language symbol (sinogram) used in Chinese, Japanese, and other languages Other uses |
is a wheeled motor vehicle used for transporting passengers. Car, Cars, CAR or CARs may also refer to: Computing C.a.R. (Z.u.L.), geometry software CAR and CDR, commands in LISP computer programming Clock with Adaptive Replacement, a page replacement algorithm Computer-assisted reporting Computer-assisted reviewing Economics Capital adequacy ratio, a ratio of a bank's capital to its risk Cost accrual ratio, an accounting formula Cumulative abnormal return Cumulative average return, a financial concept related to the time value of money Film and television Cars (franchise), a Disney/Pixar film series Cars (film), a 2006 computer animated film from Disney and Pixar The Car, a 1977 suspense-horror film Car, a BBC Two television ident first aired in 1993 (see BBC Two '1991–2001' idents) The Car (1997 film), a Malayalam film "The Car" (The Assistants episode) Literature Car (magazine), a British auto-enthusiast publication The Car (novel), a novel by Gary Paulsen Military Canadian Airborne Regiment, a Canadian Forces formation Colt Automatic Rifle, a 5.56mm NATO firearm Combat Action Ribbon, a United States military decoration U.S. Army Combat Arms Regimental System, a 1950s reorganisation of the regiments of the US Army Music The Cars, an American rock band Albums The Cars (album), an album by The Cars Peter Gabriel (1977 album) or Car Cars (Now, Now Every Children album) (2008) C.A.R. (album), a 2012 album by Serengeti Cars (soundtrack), the soundtrack to | The Cars Peter Gabriel (1977 album) or Car Cars (Now, Now Every Children album) (2008) C.A.R. (album), a 2012 album by Serengeti Cars (soundtrack), the soundtrack to the 2006 film Cars, an album by Kris Delmhorst Songs "The Car" (song), a song by Jeff Carson "Cars" (song), a 1979 single by Gary Numan "Car", a 1994 song by Built to Spill from There's Nothing Wrong with Love Places Central African Republic The Central Asian Republics Cordillera Administrative Region, Philippines Car, Azerbaijan, a village Čar, a village in Serbia Cars, Gironde, France, a commune Les Cars, Haute-Vienne, France, a commune Cima Cars, mountain of the Ligurian Alps in Italy People Car (King of Caria) Car (Greek myth) Car (surname) Jean-François Cars (1670–1739), French engraver Laurent Cars (1699–1771), French designer and engraver Science Canonical anticommutation relation Carina (constellation) Chimeric antigen receptor, artificial T cell receptors Coxsackievirus and adenovirus receptor, a protein Coherent anti-Stokes Raman spectroscopy Constitutive androstane receptor Cortisol awakening response, on waking from sleep Sports Carolina Hurricanes, a National Hockey League team Carolina Panthers, a National Football League team Club Always Ready, a Bolivian football club from La Paz Rugby Africa, formerly known as Confederation of African Rugby Transportation Railroad car Canada Atlantic Railway, 1879–1914 Canadian Atlantic Railway, 1986–1994 Carlisle railway station's station code Car, the cab of an elevator Car, a tram, streetcar, or trolley car Visual arts Cars (painting), a series of paintings by Andy Warhol The Car (Brack), a 1955 painting by John Brack Other uses Car Car, meaning tsar in several Slavic languages Carib language, is a Cariban language spoken by the Kalina people of South America (ISO 639-2 and ISO 639-3 code: car) Car language, an Austroasiatic language of the Nicobar Islands |
the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS. Daisy wheel printers Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second. Dot-matrix printers The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type). Dot-matrix printers can be broadly divided into two major classes: Ballistic wire printers Stored energy printers Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head. In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use. Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode. 
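The pin-and-column mechanism described above is easy to mimic in software: each character is a small bitmap, and the moving head fires one vertical column of pins per step. The 5×7 glyph below is a made-up pattern rather than any real printer font.

```python
# Sketch of dot-matrix printing: a character is a matrix of dots, and the
# print head fires one vertical column of pins per step as it moves across
# the paper. The 5x7 glyph here is a made-up pattern, not a real printer font.
GLYPH_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def columns(glyph):
    """Yield the glyph as vertical pin columns, the way a 7-pin head sees it."""
    width = len(glyph[0])
    for x in range(width):
        yield [row[x] == "1" for row in glyph]

def print_glyph(glyph):
    # Reassemble the columns into rows of printed dots for display.
    cols = list(columns(glyph))
    for y in range(len(glyph)):
        print("".join("#" if col[y] else "." for col in cols))

print_glyph(GLYPH_A)
```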
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century. Line printers Line printers print an entire line of text at a time. Four principal designs exist. Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers are also found in adding machines and other numeric printers (POS), the dimensions are compact as only a dozen characters need to be supported. Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular and comes in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer. Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443. A fourth design, used mainly on very early printers such as the IBM 402, features independent type bars, one for each printable position. Each bar contains the character set to be printed. The bars move vertically to position the character to be printed in front of the print hammer. In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print. Comb printers, also called line matrix printers, represent the fifth major design. These printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the example, in just eight cycles. The paper then advances, and the next pixel row is printed. 
Because far less motion is involved than in a conventional dot matrix printer, these printers are very fast compared to dot matrix printers and are competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013. Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers. Liquid ink electrostatic printers Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.) Worldwide, most survey offices used this printer before color inkjet plotters become popular. Liquid ink electrostatic printers were mostly available in width and also 6 color printing. These were also used to print large billboards. It was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers. Plotters Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings. Other printers A number of other sorts of printers are important for historical reasons, or for special purpose uses. Digital minilab (photographic paper) Electrolytic printers Spark printer Barcode printer multiple technologies, including: thermal printing, inkjet printing, and laser printing barcodes Billboard / sign paint spray printers Laser etching (product packaging) industrial printers Microsphere (special paper) Attributes Connectivity Printers can be connected to computers in many ways: directly by a dedicated data cable such as the USB, through a short-range radio like Bluetooth, a local area network using cables (such as the Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device. More than half of all printers sold at U.S. 
retail in 2010 were wireless-capable, but nearly three-quarters of consumers who had access to those printers were not taking advantage of the ability to print from multiple devices, according to the Wireless Printing Study. Printer control languages Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers. Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity of mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster. Printing speed The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool and are not as well standardised as toner yields. Pages per minute usually refers to sparse monochrome office documents rather than dense pictures, which usually print much more slowly, especially colour images. Speeds in ppm usually apply to A4 paper in most countries in the world, and to letter paper size, about 6% shorter, in North America. Printing mode The data received by a printer may be: a string of characters; a bitmapped image; a vector image; or a computer program written in a page description language, such as PCL or PostScript. Some printers can process all four types of data; others cannot. Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots. Pen plotters typically process vector images. Inkjet-based plotters can adequately reproduce all four. Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today. Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it. Monochrome, colour and photo printers A monochrome printer can produce only monochrome images, with shades of a single colour. Most printers can produce only two colors, black (ink) and white (no ink). With half-toning techniques, however, such a printer can produce acceptable grey-scale images as well. A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film.
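As a concrete illustration of the control languages and connectivity discussed above, the sketch below assembles a tiny plain-text job using a few widely documented ESC/P-style codes and sends it to a network printer over the conventional "raw" TCP port 9100. The specific codes, the address, and the assumption that the device accepts raw ESC/P at all are illustrative; many printers expect PCL, PostScript, or a proprietary PDL instead.

```python
import socket

# Sketch: a minimal plain-text print job using a few commonly documented
# ESC/P control codes (ESC @ = reset, ESC E / ESC F = bold on/off), sent to
# a network printer on the conventional "raw" port 9100. Codes, address and
# port usage are illustrative assumptions; check the target printer's PDL.
ESC = b"\x1b"
job = (
    ESC + b"@"                                         # reset to defaults
    + b"Invoice 0001\r\n"
    + ESC + b"E" + b"TOTAL: 42.00\r\n" + ESC + b"F"    # one bold line
    + b"\x0c"                                          # form feed: eject page
)

def send_raw(host: str, data: bytes, port: int = 9100) -> None:
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(data)

# send_raw("192.0.2.10", job)   # hypothetical printer address
```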
Page yield The page yield is number of pages that can be printed from a toner cartridge or ink cartridge—before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a | through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001. Dye-sublimation printers A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers. Thermal printers Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink"). Obsolete and special-purpose printing technologies The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use. Impact printers Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used. Typewriter-derived printers Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. 
The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second. Teletypewriter-derived printers The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer.
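The page-yield figure defined above feeds directly into running-cost comparisons; a toy calculation with made-up cartridge prices and rated yields:

```python
# Toy cost-per-page calculation from page yield (pages per cartridge).
# Prices and yields below are made-up illustrative numbers.
def cost_per_page(cartridge_price, page_yield):
    return cartridge_price / page_yield

cartridges = {
    "mono laser toner": (79.00, 3000),   # (price, rated page yield)
    "colour inkjet set": (54.00, 450),
}
for name, (price, pages) in cartridges.items():
    print(f"{name}: {cost_per_page(price, pages) * 100:.1f} cents/page")
```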
general, so long as they are different enough to not be judged copies of Disney's. Note additionally that Mickey Mouse is not copyrighted because characters cannot be copyrighted; rather, Steamboat Willie is copyrighted and Mickey Mouse, as a character in that copyrighted work, is afforded protection. Originality Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. However, single words or a short string of words can sometimes be registered as a trademark instead. Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other. Registration In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce his or her exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.) A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to himself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work. Fixing The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. 
US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance". Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Copyright notice Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it. Whether these things are watermarked, signed, or have any other sort of indication of the copyright is a different story however. In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful. Enforcement Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing) In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation, administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court. 
"...by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required." Copyright infringement For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement. Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, instead has the potential to increase sales. According to the IP Commission Report the annual cost of intellectual property theft to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud. Rights granted According to World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights. Economic rights With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. 
Right owners can authorise or prohibit: reproduction of the work in various forms, such as printed publications or sound recordings; distribution of copies of the work; public performance of the work; broadcasting or other communication of the work to the public; translation of the work into other languages; and adaptation of the work, such as turning a novel into a screenplay. Moral rights Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights: the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author's honour or reputation (sometimes called the right of integrity). These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as a part of the debates being held at the US Copyright Office on the question of inclusion of Moral Rights as a part of the framework of the Copyright Law in United States, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. Further, the Office concludes that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole. The Copyright Law in the United States, several exclusive rights are granted to the holder of a copyright, as are listed below: protection of the work; to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed, etc. to produce copies or reproductions of the work and to sell those copies; (including, typically, electronic copies) to import or export the work; to create derivative works; (works that adapt the original work) to perform or display the work publicly; to sell or cede these rights to others; to transmit or display by radio, video or internet. 
The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This aspect of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit the holder not to use or exploit the copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right. UK copyright law gives creators both economic rights and moral rights. ‘Copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, which is generally identified as the right of attribution, and the right not to have one's work subjected to ‘derogatory treatment’, that is, the right of integrity. Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyright Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both economic and moral rights under different provisions of the Indian Copyright Act, 1957. Duration Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire. The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult.
For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-wartime extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those. In the United States, all books and other works, except for sound recordings, published before 1926 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. In addition, works published before 1964 whose copyrights were not renewed within 28 years of the year of first publication are also in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country. But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries. In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations with valuable copyrights that would otherwise have expired, and it has been the subject of substantial criticism on this point. Limitations and exceptions In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents. Idea–expression dichotomy and the merger doctrine The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b).
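To make the general term rules described above concrete, here is a minimal, purely illustrative Python sketch of the "life of the author plus N years, rounded to the end of the calendar year" calculation. It deliberately ignores jurisdiction-specific rules such as the US publication-based terms, renewal requirements, and wartime extensions mentioned above, and the function and variable names are ours rather than anything defined in law.

```python
from datetime import date

def copyright_expiry_year(author_death_year: int, post_mortem_term: int = 70) -> int:
    """Last calendar year of protection under a simple 'life plus N years' rule."""
    return author_death_year + post_mortem_term

def in_public_domain(author_death_year: int, post_mortem_term: int = 70,
                     today: date | None = None) -> bool:
    # End-of-calendar-year rule: the work enters the public domain on
    # 1 January of the year following the expiry year.
    today = today or date.today()
    return today.year > copyright_expiry_year(author_death_year, post_mortem_term)

# An author who died in 1950, under a life-plus-70 regime, is protected
# through 31 December 2020 and enters the public domain on 1 January 2021.
print(in_public_domain(1950, today=date(2020, 6, 1)))  # False
print(in_public_domain(1950, today=date(2021, 1, 1)))  # True
```

Actual terms frequently diverge from this sketch because of the renewal requirements, wartime extensions, and retroactive term changes described above.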
The first-sale doctrine and exhaustion of rights Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores. Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. It is important to note that the first-sale doctrine permits the transfer of the particular legitimate copy involved. It does not permit making or distributing additional copies. In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission. The defendant, without permission from the publisher, imported the textbooks and resold them on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation. In addition, copyright, in most cases, does not prohibit acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible. Fair use and fair dealing Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are: the purpose and character of one's use; the nature of the copyrighted work; what amount and proportion of the whole work was taken; and the effect of the use upon the potential market for or value of the copyrighted work. In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however, in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption.
In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage; and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later viewing or listening, once and only once. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine-readable form for a computer. In the United States the AHRA (Audio Home Recording Act, codified as Chapter 10 of Title 17 in 1992) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders. Later acts amended US copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. An appellate court has held that fair use is not a defense to engaging in such distribution. EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are: photographic reproductions on paper or any similar medium of works (excluding sheet music), provided that the rightholder receives fair compensation; reproduction made by libraries, educational establishments, museums or archives, which are non-commercial; archival reproductions of broadcasts; uses for the benefit of people with a disability; for demonstration or repair of equipment; for non-commercial research or private study; and when used in parody. Accessible copies It is legal in several countries, including the United Kingdom and the United States, to produce alternative versions (for example, in large print or Braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder. Religious Service Exemption In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely that the "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright. Transfer, assignment and licensing A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations.
The creator (and original copyright holder) benefits, or expects to benefit, from production and marketing capabilities far beyond those the author could achieve alone. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time. A transfer or licence may have to meet particular formal requirements in order to be effective; for example, under the Australian Copyright Act 1968, the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction. Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of tracking every individual work, copyright collectives (collecting societies) and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (or even thousands) of works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work that collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify. Free licenses Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not so much a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including, in order of longevity, the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works.
Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licenses are the GNU General Public License, BSD licenses and some Creative Commons licenses. Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable. Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether he or she is willing to allow modifications to the work, whether he or she permits the creation of derivative works and whether he or she is willing to permit commercial use of the work. Approximately 130 million individuals had received such licenses.
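The six standard CC licenses and the stipulations just described can be summarised informally as a small lookup table. The sketch below is our own illustration, not an official Creative Commons data source or API, and the helper function is hypothetical.

```python
# Informal summary of the six standard Creative Commons licenses and the
# stipulations discussed above (share-alike, commercial use, derivative works).
# Illustrative only; not Creative Commons' own data.
CC_LICENSES = {
    "CC BY":       {"share_alike": False, "commercial_use": True,  "derivatives": True},
    "CC BY-SA":    {"share_alike": True,  "commercial_use": True,  "derivatives": True},
    "CC BY-ND":    {"share_alike": False, "commercial_use": True,  "derivatives": False},
    "CC BY-NC":    {"share_alike": False, "commercial_use": False, "derivatives": True},
    "CC BY-NC-SA": {"share_alike": True,  "commercial_use": False, "derivatives": True},
    "CC BY-NC-ND": {"share_alike": False, "commercial_use": False, "derivatives": False},
}
# All six require attribution; the NC and ND variants are the ones generally
# not considered "free" under the definitions mentioned above.

def licenses_allowing(commercial_use: bool, derivatives: bool) -> list[str]:
    """Hypothetical helper: which licenses permit a proposed reuse?"""
    return [name for name, t in CC_LICENSES.items()
            if (t["commercial_use"] or not commercial_use)
            and (t["derivatives"] or not derivatives)]

# A reuser who wants to adapt a work and sell the result is limited to
# CC BY and CC BY-SA (the latter requiring the adaptation to be share-alike).
print(licenses_allowing(commercial_use=True, derivatives=True))
```

Under this informal mapping, only CC BY and CC BY-SA satisfy the free-culture definitions mentioned earlier, which is consistent with Creative Commons' own advice noted above.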
During the 11th and 12th centuries the Catalan rulers expanded southward to the Ebro river, and in the 13th century they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th century. In the Late Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness. Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries), and the Valencian school of poetry culminating in Ausiàs March (1397–1459). By the 15th century, the city of Valencia had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world. During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what Costa Carreras terms "one of the 'great languages' of medieval Europe". Martorell's outstanding novel of chivalry Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan. Start of the modern era With the union of the crowns of Castile and Aragon in 1479, the use of Spanish gradually became more prestigious and marked the start of the decline of Catalan. Starting in the 16th century, Catalan literature came under the influence of Spanish, and the urban and literary classes became bilingual. With the Treaty of the Pyrenees (1659), Spain ceded the northern part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French, which in 1700 became the sole official language of the region. Shortly after the French Revolution (1789), the French First Republic prohibited official use of, and enacted discriminatory policies against, the regional languages of France, such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque. France: 19th to 20th centuries Following the French establishment of the colony of Algeria from 1830 onward, the colony received several waves of Catalan-speaking settlers. People from the Spanish Alacant province settled around Oran, whereas Algiers received immigration from Northern Catalonia and Menorca. Their speech was known as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant. The government of France formally recognizes only French as an official language. Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales officially recognized Catalan as one of the languages of the department and seeks to further promote it in public life and education. Spain: 18th to 20th centuries In Spain, the decline of Catalan continued into the 18th century. The defeat of the pro-Habsburg coalition in the War of Spanish Succession (1714) initiated a series of laws which, among other centralizing measures, imposed the use of Spanish in legal documentation all over Spain. However, the 19th century saw a Catalan literary revival (the Renaixença), which has continued up to the present day.
This period started with Aribau's Ode to the Homeland (1833), followed in the second half of the 19th century and the early 20th by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In the 19th century, the region of Carche, in the province of Murcia, was repopulated with Catalan speakers from the Land of Valencia. Catalan obtained an orthographic standardization in 1913 and became an official language during the Second Spanish Republic (1931–1939). The Second Spanish Republic saw a brief period of tolerance, with most restrictions against Catalan lifted. The Catalan language and culture were suppressed during the Spanish Civil War (1936–1939) and the subsequent decades in Francoist Catalonia. The Francoist dictatorship (1939–1975) banned the use of Catalan in schools and in public administration. Oppression of the Catalan language and identity was carried out in schools, through governmental bodies, and in religious centers. Franco's desire for a homogeneous Spanish population resonated with some Catalans in favor of his regime, primarily members of the upper class, who began to reject the use of Catalan. Despite all of these hardships, Catalan continued to be used privately within households, and it was able to survive Francisco Franco's dictatorship. Several prominent Catalan authors resisted the suppression through literature. In addition to the loss of prestige for Catalan and the prohibition of its use in schools, migration during the 1950s into Catalonia from other parts of Spain also contributed to the diminished use of the language. These migrants were often unaware of the existence of Catalan, and thus felt no need to learn or use it. Catalonia was the economic powerhouse of Spain, so these migrations continued to occur from all corners of the country. Historically, employment opportunities were reduced for those who were not bilingual. Present day Since the Spanish transition to democracy (1975–1982), Catalan has been institutionalized as an official language, language of education, and language of mass media, all of which have contributed to its increased prestige. In Catalonia, there is an unparalleled large bilingual European non-state linguistic community. The teaching of Catalan is mandatory in all schools, but it is possible to use Spanish for studying in the public education system of Catalonia in two situations – if the teacher assigned to a class chooses to use Spanish, or during the learning process of one or more recently arrived immigrant students. There is also some intergenerational shift towards Catalan. According to the Statistical Institute of Catalonia, in 2013 Catalan was the second most commonly used language in Catalonia, after Spanish, as a native or self-defining language: 7% of the population self-identified with both Catalan and Spanish equally, 36.4% with Catalan and 47.5% only with Spanish. In 2003 the same studies had concluded that there was no clear language preference for self-identification within the population above 15 years old: 5% self-identified with both languages, 44.3% with Catalan and 47.5% with Spanish. To promote the use of Catalan, the Generalitat de Catalunya (Catalonia's autonomous government) spends part of its annual budget on the promotion of the use of Catalan in Catalonia and in other territories, through entities such as the Consortium for Linguistic Normalization. In Andorra, Catalan has always been the sole official language.
Since the promulgation of the 1993 constitution, several policies favoring Catalan have been enforced, such as Catalan-medium education. On the other hand, there are several language shift processes currently taking place. In the Northern Catalonia area of France, Catalan has followed the same trend as the other minority languages of France, with most of its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of primary education students and by 15% of secondary students. The cultural association promotes a network of community-run schools engaged in Catalan language immersion programs. In Alicante province, Catalan is being replaced by Spanish, and in Alghero by Italian. There is also well-ingrained diglossia in the Valencian Community, Ibiza, and, to a lesser extent, in the rest of the Balearic Islands. Classification and relationship with other Romance languages One classification of Catalan, given by Pèire Bèc, nests the language as follows: Romance languages > Italo-Western languages > Western Romance languages > Gallo-Iberian languages > Gallo-Romance languages > Occitano-Romance languages > Catalan language. However, the ascription of Catalan to the Occitano-Romance branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones, such as Ramón Menéndez Pidal. Catalan bears varying degrees of similarity to the linguistic varieties subsumed under the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus, as is to be expected of closely related languages, Catalan today shares many traits with other Romance languages. Relationship with other Romance languages Some include Catalan in Occitan, as the linguistic distance between this language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. Catalan was considered a dialect of Occitan until the end of the 19th century and still today remains its closest relative. Catalan shares many traits with the other neighboring Romance languages (Occitan, French, Italian, Sardinian as well as Spanish and Portuguese among others). However, despite being spoken mostly on the Iberian Peninsula, Catalan has marked differences with the Iberian Romance group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary, showing instead its closest affinity with languages native to France and northern Italy, particularly Occitan and, to a lesser extent, Gallo-Romance (Franco-Provençal, French, Gallo-Italian). According to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with Portuguese and Spanish; 76% with Ladin; 75% with Sardinian; and 73% with Romanian. During much of its history, and especially during the Francoist dictatorship (1939–1975), the Catalan language was ridiculed as a mere dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish and Catalan have important differences in their sound systems, lexicon, and grammatical features; in these respects the language is closer to Occitan (and French). There is evidence that, at least from the 2nd century, the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation arose generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish , Asturian and Portuguese vs.
Catalan , Occitan "to boil") and innovatory regionalisms (Sp , Ast vs. Cat , Oc "bullock"), while Catalan has a shared history with the Western Romance innovative core, especially Occitan. Like all Romance languages, Catalan has a handful of native words which are unique to it, or rare elsewhere. These include: verbs: 'to fasten; transfix' > 'to compose, write up', > 'to combine, conjugate', > 'to wake; awaken', 'to thicken; crowd together' > 'to save, keep', > 'to miss, yearn, pine for', 'to investigate, track' > Old Catalan enagar 'to incite, induce', > OCat ujar 'to exhaust, fatigue', > 'to appease, mollify', > 'to reject, refuse'; nouns: > 'pomace', > 'reedmace', > 'catarrh', > 'snowdrift', > 'ardor, passion', > 'brake', > 'avalanche', > 'edge, border', 'sawfish' > pestriu > 'thresher shark, smooth hound; ray', 'live coal' > 'spark', > tardaó > 'autumn'. The Gothic superstrate produced different outcomes in Spanish and Catalan. For example, Catalan "mud" and "to roast", of Germanic origin, contrast with Spanish and , of Latin origin; whereas Catalan "spinning wheel" and "temple", of Latin origin, contrast with Spanish and , of Germanic origin. The same happens with Arabic loanwords. Thus, Catalan "large earthenware jar" and "tile", of Arabic origin, contrast with Spanish and , of Latin origin; whereas Catalan "oil" and "olive", of Latin origin, contrast with Spanish and . However, the Arabic element in Spanish is generally much more prevalent. Situated between two large linguistic blocks (Iberian Romance and Gallo-Romance), Catalan has many unique lexical choices, such as "to miss somebody", "to calm somebody down", and "reject". Geographic distribution Catalan-speaking territories Traditionally Catalan-speaking territories are sometimes called the (Catalan Countries), a denomination based on cultural affinity and common heritage, that has also had a subsequent political interpretation but no official status. Various interpretations of the term may include some or all of these regions. Number of speakers The number of people known to be fluent in Catalan varies depending on the sources used. A 2004 study did not count the total number of speakers, but estimated a total of 9–9.5 million by matching the percentage of speakers to the population of each area where Catalan is spoken. The web site of the Generalitat de Catalunya estimated that as of 2004 there were 9,118,882 speakers of Catalan. These figures only reflect potential speakers; today it is the native language of only 35.6% of the Catalan population. According to Ethnologue, Catalan had 4.1 million native speakers and 5.1 million second-language speakers in 2021. According to a 2011 study the total number of Catalan speakers is over 9.8 million, with 5.9 million residing in Catalonia. More than half of them speak Catalan as a second language, with native speakers being about 4.4 million of those (more than 2.8 in Catalonia). Very few Catalan monoglots exist; basically, virtually all of the Catalan speakers in Spain are bilingual speakers of Catalan and Spanish, with a sizable population of Spanish-only speakers of immigrant origin (typically born outside Catalonia or with both parents born outside Catalonia) existing in the major Catalan urban areas as well. In Roussillon, only a minority of French Catalans speak Catalan nowadays, with French being the majority language for the inhabitants after a continued process of language shift. 
According to a 2019 survey by the Catalan government, 31.5% of the inhabitants of Catalonia have Catalan as their first language at home, whereas 52.7% have Spanish, 2.8% both Catalan and Spanish, and 10.8% other languages. Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. According to this 2013 census, Catalan is also very commonly spoken in the city (population 1,501,262): it is understood by 95% of the population, while 72.3% of those over the age of 2 can speak it (1,137,816), 79% can read it (1,246,555), and 53% can write it (835,080). The proportion in Barcelona who can speak it, 72.3%, is lower than that of the overall Catalan population, of whom 81.2% over the age of 15 speak the language. Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system. An important social characteristic of the Catalan language is that all the areas where it is spoken are bilingual in practice: together with the French language in Roussillon, with Italian in Alghero, with Spanish and French in Andorra, and with Spanish in the rest of the territories. Phonology Catalan phonology varies by dialect. Notable features include: Marked contrast of the vowel pairs and , as in other Western Romance languages, other than Spanish. Lack of diphthongization of Latin short , , as in Galician and Portuguese, but unlike French, Spanish, or Italian. Abundance of diphthongs containing , as in Galician and Portuguese. In contrast to other Romance languages, Catalan has many monosyllabic words, and these may end in a wide variety of consonants, including some consonant clusters. Additionally, Catalan has final obstruent devoicing, which gives rise to an abundance of such couplets as ("male friend") vs. ("female friend"). Central Catalan pronunciation is considered to be standard for the language. The descriptions below are mostly representative of this variety. For the differences in pronunciation between the different dialects, see the section on pronunciation of dialects in this article. Vowels Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: , a common feature in Western Romance, with the exception of Spanish. Balearic also has instances of stressed . Dialects differ in the different degrees of vowel reduction, and the incidence of the pair . In Central Catalan, unstressed vowels reduce to three: ; ; remains distinct. The other dialects have different vowel reduction processes (see the section on pronunciation of dialects in this article). Etymology and pronunciation The word Catalan is derived from the territorial name of Catalonia, itself of disputed etymology.
The main theory suggests that (Latin Gathia Launia) derives from the name Gothia or Gauthia ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland > Gothlandia > Gothalania > Catalonia theoretically derived. In English, the term referring to a person first appears in the mid 14th century as Catelaner, followed in the 15th century as Catellain (from French). It is attested as a language name since at least 1652. The word Catalan can be pronounced in English as , or . The endonym is pronounced in the Eastern Catalan dialects, and in the Western dialects. In the Valencian Community and Carche, the term is frequently used instead. Thus, the name "Valencian", although often employed for referring to the varieties specific to the Valencian Community and Carche, is also used by Valencians as a name for the language as a whole, synonymous with "Catalan". Both uses of the term have their respective entries in the dictionaries by the AVL and the IEC. See also the status of Valencian below. History Middle Ages By the 9th century, Catalan had evolved from Vulgar Latin on both sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the separation of the County of Barcelona from the Carolingian Empire in 988. In the 11th century, documents written in macaronic Latin began to show Catalan elements, with texts written almost completely in Romance appearing by 1080. Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries.
During ascent, faulty temperature sensor readings caused the premature shutdown of one RS-25 engine. Booster Systems Engineer Jenny M. Howard acted quickly to command the crew to inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that may have resulted in the loss of crew and vehicle (LOCV). The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower-than-planned orbital altitude. The plan had been for a by orbit, but the mission was carried out at by . Mission summary STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft. Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite mission replanning necessitated by Challenger's abort-to-orbit trajectory, the Spacelab mission was declared a success. The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission. Spacelab Infrared Telescope The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a aperture helium-cooled infrared telescope, observing light between wavelengths of 1.7 to 118 μm. Heat emissions from the Shuttle were thought to have corrupted the long-wavelength data, but the telescope still returned useful astronomical data. Another problem was that a piece of Mylar insulation broke loose and floated into the line of sight of the telescope. The IRT collected infrared data on 60% of the galactic plane. (See also the List of largest infrared telescopes.) A later space mission that experienced a stray-light problem from debris was the Gaia astrometry spacecraft, launched in 2013 by ESA; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield. Other payloads The Plasma Diagnostics Package (PDP), which had previously flown on STS-3, made its return on the mission and was part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission. In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans provided by Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to the Reagan administration that Coke should not be the first cola in space.
The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F. Red Team tested Coke, and Blue Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity. In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated on the observations made from Spacelab 2. Landing Challenger landed at Edwards Air Force Base, California, on 6 August 1985, at 12:45:26 p.m. PDT. Its rollout distance was . The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on 11 August 1985. Mission insignia The mission insignia was designed by Houston, Texas artist Skip Bradley. Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they were in relative to the Sun during the flight. The nineteen stars indicate that the mission was the 19th shuttle flight. Crew bios C. Gordon Fullerton died on 21 August 2013, aged 76. Karl Gordon Henize died on 5 October 1993, aged 66, on an expedition to Mount Everest studying the effects of radiation from space. Legacy One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy.
The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto, featuring only one soloist. Composers began to place more importance on the particular soloist's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. Nonetheless, some concerti grossi remained, the most famous of which is Mozart's Sinfonia Concertante for Violin and Viola in E-flat major. Main characteristics In the classical period, the theme consists of phrases with contrasting melodic figures and rhythms. These phrases are relatively brief, typically four bars in length, and can occasionally seem sparse or terse. The texture is mainly homophonic, with a clear melody above a subordinate chordal accompaniment, for instance an Alberti bass. This contrasts with the practice in Baroque music, where a piece or movement would typically have only one musical subject, which would then be worked out in a number of voices according to the principles of counterpoint, while maintaining a consistent rhythm or metre throughout. As a result, Classical music tends to have a lighter, clearer texture than the Baroque. The classical style draws on the style galant, a musical style which emphasised light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Structurally, Classical music generally has a clear musical form, with a well-defined contrast between tonic and dominant, introduced by clear cadences. Dynamics are used to highlight the structural characteristics of the piece. In particular, sonata form and its variants were developed during the early classical period and were frequently used. The Classical approach to structure again contrasts with the Baroque, where a composition would normally move between tonic and dominant and back again, but through a continual progress of chord changes and without a sense of "arrival" at the new key. While counterpoint was less emphasised in the classical period, it was by no means forgotten, especially later in the period, and composers still used counterpoint in "serious" works such as symphonies and string quartets, as well as religious pieces, such as Masses. The classical musical style was supported by technical developments in instruments. The widespread adoption of equal temperament made classical musical structure possible, by ensuring that cadences in all keys sounded similar. The fortepiano and then the pianoforte replaced the harpsichord, enabling more dynamic contrast and more sustained melodies. Over the Classical period, keyboard instruments became richer, more sonorous and more powerful. The orchestra increased in size and range, and became more standardised. The harpsichord or pipe organ basso continuo role in the orchestra fell out of use between 1750 and 1775, leaving the string section; the woodwinds became a self-contained section, consisting of clarinets, oboes, flutes and bassoons. While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large-scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures.
History Baroque/Classical transition c. 1730–1760 In his book The Classical Style, author and pianist Charles Rosen claims that from 1755 to 1775, composers groped for a new style that was more effectively dramatic. In the High Baroque period, dramatic expression was limited to the representation of individual affects (the "doctrine of affections", or what Rosen terms "dramatic sentiment"). For example, in Handel's oratorio Jephtha, the composer renders four emotions separately, one for each character, in the quartet "O, spare your daughter". Eventually this depiction of individual emotions came to be seen as simplistic and unrealistic; composers sought to portray multiple emotions, simultaneously or progressively, within a single character or movement ("dramatic action"). Thus in the finale of act 2 of Mozart's Die Entführung aus dem Serail, the lovers move "from joy through suspicion and outrage to final reconciliation." Musically speaking, this "dramatic action" required more musical variety. Whereas Baroque music was characterized by seamless flow within individual movements and largely uniform textures, composers after the High Baroque sought to interrupt this flow with abrupt changes in texture, dynamic, harmony, or tempo. Among the stylistic developments which followed the High Baroque, the most dramatic came to be called Empfindsamkeit (roughly "sensitive style"), and its best-known practitioner was Carl Philipp Emanuel Bach. Composers of this style employed the above-discussed interruptions in the most abrupt manner, and the music can sound illogical at times. The Italian composer Domenico Scarlatti took these developments further. His more than five hundred single-movement keyboard sonatas also contain abrupt changes of texture, but these changes are organized into periods, balanced phrases that became a hallmark of the classical style. However, Scarlatti's changes in texture still sound sudden and unprepared. The outstanding achievement of the great classical composers (Haydn, Mozart and Beethoven) was their ability to make these dramatic surprises sound logically motivated, so that "the expressive and the elegant could join hands." Between the death of J. S. Bach and the maturity of Haydn and Mozart (roughly 1750–1770), composers experimented with these new ideas, which can be seen in the music of Bach's sons. Johann Christian developed a style which we now call Rococo, comprising simpler textures and harmonies, and which was "charming, undramatic, and a little empty." As mentioned previously, Carl Philipp Emanuel sought to increase drama, and his music was "violent, expressive, brilliant, continuously surprising, and often incoherent." And finally Wilhelm Friedemann, J.S. Bach's eldest son, extended Baroque traditions in an idiomatic, unconventional way. At first the new style took over Baroque forms—the ternary da capo aria, the sinfonia and the concerto—but composed them with simpler parts, more notated ornamentation rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures.
One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords. Baroque music generally uses many harmonic fantasies and polyphonic sections that focus less on the structure of the musical piece, with less emphasis on clear musical phrases. In the classical period, the harmonies became simpler. However, the structure of the piece, the phrases and small melodic or rhythmic motives, became much more important than in the Baroque period. Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, including Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed the greatest public success. The phase between the Baroque and the rise of the Classical (around 1730) was home to various competing musical styles. The diversity of artistic paths is represented in the sons of Johann Sebastian Bach: Wilhelm Friedemann Bach, who continued the Baroque tradition in a personal way; Johann Christian Bach, who simplified textures of the Baroque and most clearly influenced Mozart; and Carl Philipp Emanuel Bach, who composed passionate and sometimes violently eccentric music of the Empfindsamkeit movement. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C. P. E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form. 1750–1775 By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music. The "normal" orchestra ensemble—a body of strings supplemented by winds—and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces were still set with some Baroque characteristics: individual movements still focused on one "affect" (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough. The first great master of the style was the composer Joseph Haydn.
In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As a vice-Kapellmeister and later Kapellmeister, his output expanded: he composed over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many. While some scholars suggest that Haydn was overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned—earning him the titles "father of the symphony" and "father of the string quartet". One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism—the Sturm und Drang, or "storm and stress" phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature; however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades. The Farewell Symphony, No. 45 in F-sharp minor, exemplifies Haydn's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the "mature" Classical style, in which the period of reaction against late Baroque complexity yielded to a period of integration of Baroque and Classical elements. 1775–1790 Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early in his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music. Haydn's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn's ideas and applied them to two of the major genres of the day: opera, and the virtuoso concerto.
Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor
was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts. Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him. Mozart's arrival in Vienna in 1780 brought an acceleration in the development of the Classical style.
There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous |
but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Other writing systems, such as Arabic and Hebrew, are represented with more complex character repertoires due to the need to accommodate things like bidirectional text and glyphs that are joined together in different ways for different situations. A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" to 66, and so on. Multiple coded character sets may share the same repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points. A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF. Next, a character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE or UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using byte order marks or escape sequences; compressing schemes try to minimise the number of bytes used per code unit (such as SCSU, BOCU, and Punycode). Although UTF-32BE is a simpler CES, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-width ASCII and maps Unicode code points to variable-width sequences of octets, or UTF-16BE, which is backward compatible with fixed-width UCS-2BE and maps Unicode code points to variable-width sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion. Finally, there may be a higher level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang. The Unicode model uses the term character map for historical systems which directly assign a sequence of characters to a sequence of bytes, covering all of CCS, CEF and CES layers. Character sets, character maps and code pages Historically, the terms "character encoding", "character map", "character set" and "code page" were synonymous in computer science, as the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units – usually with a single character per code unit. But now the terms have related but distinct meanings, due to efforts by standards bodies to use precise terminology when writing about and unifying many different encoding systems. Regardless, the terms are still used interchangeably, with character set being nearly ubiquitous. 
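The layered model described above (repertoire, coded character set, encoding form, encoding scheme) can be made concrete with a short sketch. The following Python snippet is purely illustrative; the variable names and the hand-rolled big-endian serialisation are not part of any standard API. It walks the single character "A" through the CCS, CEF and CES layers and checks the result against Python's built-in UTF-16BE codec.

ch = "A"

# CCS: the coded character set maps the abstract character to a code point.
code_point = ord(ch)                      # 65, i.e. U+0041

# CEF: the encoding form maps the code point to fixed-size code units.
# In UTF-16, any code point below U+10000 becomes a single 16-bit unit.
utf16_code_units = [code_point]           # [0x0041]

# CES: the encoding scheme serialises the code units to octets, big-endian here.
utf16be_octets = bytes(b for u in utf16_code_units for b in (u >> 8, u & 0xFF))

print(hex(code_point))                    # 0x41
print(utf16be_octets)                     # b'\x00A'
print(ch.encode("utf-16-be"))             # b'\x00A', the same bytes via the codec

Byte order marks and multi-unit sequences only enter the picture at the CES and CEF layers respectively, which is exactly the separation the model is meant to capture.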
A "code page" usually means a byte-oriented encoding, but with regard to some suite of encodings (covering different scripts), where many characters share the same codes in most or all those code pages. Well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437); see Windows code page for details. Most, but not all, encodings referred to as code pages are single-byte encodings (but see octet on byte size). IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP". The term "code page" does not occur in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. In contrast to a "coded character set", a "character encoding" is a map from abstract characters to code words. A "character set" in HTTP (and MIME) parlance is the same as a character encoding (but not the same as a CCS). "Legacy encoding" is a term sometimes used to characterize old character encodings, but with an ambiguity of sense. Most of its use is in the context of Unicodification, where it refers to encodings that fail to cover all Unicode code points, or, more generally, that use a somewhat different character repertoire: several code points representing one Unicode character, or vice versa (see e.g. code page 437). Some sources refer to an encoding as legacy only because it preceded Unicode. All Windows code pages are usually referred to as legacy, both because they antedate Unicode and because they are unable to represent all 2²¹ possible Unicode code points. Character encoding translation As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between encoding schemes as a form of data transcoding. Some of these are cited below. Cross-platform: Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu. iconv – program and standardized API to convert encodings luit – program that converts encoding of input and output to programs running interactively convert_encoding.py – Python-based utility to convert text files between arbitrary encodings and line endings. decodeh.py – algorithm and module to heuristically guess the encoding of a string. International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C. chardet – This is a translation of the Mozilla automatic-encoding-detection code into the Python computer language. The newer versions of the Unix file command attempt to do a basic detection of character encoding (also available on Cygwin). charset – C++ template library with a simple interface to convert between C++/user-defined streams. charset defines many character sets and allows Unicode formats to be used with support for endianness. Unix-like: cmv – simple tool for transcoding filenames. convmv – convert a filename from one encoding to another. cstocs – convert file contents from one encoding to another for the Czech and Slovak languages. enca – analyzes encodings for given text files. recode – convert file contents from one encoding to another utrac – convert file contents from one encoding to another.
Windows: Encoding.Convert – .NET API MultiByteToWideChar/WideCharToMultiByte – convert from ANSI to Unicode and Unicode to ANSI cscvt – character set conversion tool enca – analyzes encodings for given text files. See also Percent encoding Alt code Character encodings in HTML :Category:Character encoding – articles related to character encoding in general :Category:Character sets – articles detailing specific character encodings Hexadecimal representations Mojibake – character set mismap. Mojikyo – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure. Presentation layer TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters. Universal Character Set characters Charset sniffing – used in some applications when character encoding metadata is not available Common character encodings ISO 646 ASCII EBCDIC ISO 8859: ISO 8859-1 Western Europe ISO 8859-2 Western and Central Europe ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto) ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp) ISO 8859-5 Cyrillic alphabet ISO 8859-6 Arabic ISO 8859-7 Greek ISO 8859-8 Hebrew ISO 8859-9 Western Europe with amended Turkish character set ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set ISO 8859-11 Thai ISO 8859-13 Baltic languages plus Polish ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh) ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1 ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic) CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872 MS-Windows character sets: Windows-1250 for Central European languages that use Latin script (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian) Windows-1251 for Cyrillic alphabets Windows-1252 for Western languages Windows-1253 for Greek Windows-1254 for Turkish Windows-1255 for Hebrew Windows-1256 for Arabic Windows-1257 for Baltic languages Windows-1258 for Vietnamese Mac OS Roman KOI8-R, KOI8-U, KOI7 MIK ISCII TSCII VISCII JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms. A code unit in UTF-8 consists of 8 bits; a code unit in UTF-16 consists of 16 bits; a code unit in UTF-32 consists of 32 bits. Example of a code unit: consider a string of the letters "abc" followed by U+10400 (represented with 1 char32_t, 2 char16_t or 4 char8_t code units). That string contains: four characters; four code points; and either four code units in UTF-32 (00000061, 00000062, 00000063, 00010400), five code units in UTF-16 (0061, 0062, 0063, d801, dc00), or seven code units in UTF-8 (61, 62, 63, f0, 90, 90, 80). The convention to refer to a character in Unicode is to start with 'U+' followed by the code point value in hexadecimal. The range of valid code points for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters.
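The counts in the worked example above can be verified directly in any language with Unicode strings; the short Python sketch below assumes U+10400 as the supplementary character implied by the listed code unit values.

# Counting code points and code units for "abc" plus U+10400.
s = "abc\U00010400"

print(len(s))                              # 4 code points (Python strings index code points)
print(len(s.encode("utf-32-be")) // 4)     # 4 code units of 32 bits
print(len(s.encode("utf-16-be")) // 2)     # 5 code units of 16 bits (one surrogate pair)
print(len(s.encode("utf-8")))              # 7 code units of 8 bits
print(s.encode("utf-8").hex(" "))          # 61 62 63 f0 90 90 80 (separator argument needs Python 3.8+)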
Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters. A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding: UTF-8: code points map to a sequence of one, two, three or four code units. UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs". UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit. GB18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units. Unicode encoding model Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a modern, unified character encoding. Rather than mapping characters directly to octets (bytes), they separately define what characters are available, corresponding natural numbers (code points), how those numbers are encoded as a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets. The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model correctly requires more precise terms than "character set" and "character encoding." The terms used in the modern model follow: A character repertoire is the full set of abstract characters that a system supports. The repertoire may be closed, i.e. no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series), or it may be open, allowing additions (as is the case with Unicode and to a limited extent the Windows code pages). The characters in a given repertoire reflect decisions that have been made about how to divide writing systems into basic information units. The basic variants of the Latin, Greek and Cyrillic alphabets can be broken down into letters, digits, punctuation, and a few special characters such as the space, which can all be arranged in simple linear sequences that are displayed in the same order they are read. But even with these alphabets, diacritics pose a complication: they can be regarded either as part of a single character containing a letter and diacritic (known as a precomposed character), or as separate characters. The former allows a far simpler text handling system but the latter allows any letter/diacritic combination to be used in text.
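Returning to the UTF-16 encoding form described above: the surrogate-pair mechanism for code points at U+10000 and beyond is simple arithmetic, sketched below for U+10400 (the example code point assumed earlier; the same steps apply to any supplementary code point).

# How UTF-16 turns a supplementary code point into a surrogate pair.
cp = 0x10400
offset = cp - 0x10000                        # a 20-bit value, here 0x00400
high = 0xD800 + (offset >> 10)               # high (lead) surrogate: 0xD801
low = 0xDC00 + (offset & 0x3FF)              # low (trail) surrogate: 0xDC00

print(hex(high), hex(low))                   # 0xd801 0xdc00
print(chr(cp).encode("utf-16-be").hex(" "))  # d8 01 dc 00, matching the pair above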
General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead. Display There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding: Code point: decimal 7, hexadecimal 0x07 An abbreviation, often three capital letters: BEL A special character condensing the abbreviation: Unicode U+2407 (␇), "symbol for bell" An ISO 2047 graphical representation: Unicode U+237E (⍾), "graphic for bell" Caret notation in ASCII, where code point 00xxxxx is represented as a caret followed by the capital letter at code point 10xxxxx: ^G An escape sequence, as in C/C++ character string codes: \a, \x07, etc. How control characters map to keyboards ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 64 from the decimal ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 31, forcing bits 6 and 7 to zero. For example, pressing "control" and the letter "g" or "G" (code 107 in octal or 71 in base 10, which is 01000111 in binary) produces the code 7 (Bell, 7 in base 10, or 00000111 in binary). The NULL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, many terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. This approach is not able to represent the DEL character because of its value (code 127), but Ctrl-? is often used for this character, as subtracting 64 from "?" (code 63) gives −1, which if masked to 7 bits is 127. When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) varies between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems. Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down. Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter"). Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions.
The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above. The design purpose The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous. Printing and display control Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early implementation of this idea was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed. The carriage return character (CR), when sent to such a device, causes it to put the character at the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line). The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets). The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading. The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line. The backspace character (BS) moves the printing position one character space backwards. On printers, this is most often used so the printer can overprint characters to make other, not normally available, characters. On terminals and other electronic output devices, there are often software (or hardware) configuration choices which will allow a destruct backspace (i.e., a BS, SP, BS sequence) which erases, or a non-destructive one which does not. The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing. With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there were, and remain, a large variety of standard sequences to choose from. 
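Two of the mechanics described in this section reduce to a few lines of code: the key-to-control-code mapping with its caret notation, and the destructive backspace sequence. The Python sketch below is illustrative only; the helper names are made up for the example.

def control_code(key: str) -> int:
    # Ctrl+key clears bits 6 and 7 of the key's ASCII code, which is the same
    # as subtracting 64 from the code of the upper-case letter.
    return ord(key.upper()) & 0x1F

def caret_notation(code: int) -> str:
    # Render control codes 0..31 as ^@, ^A .. ^_, and DEL (127) as ^?.
    return "^" + chr(code + 64) if code < 32 else "^?"

print(control_code("g"))          # 7 (BEL)
print(ord("G") - 64)              # 7, the "subtract 64" implementation
print(caret_notation(7))          # ^G
print(caret_notation(0))          # ^@ (NUL, i.e. Ctrl-@)
print(caret_notation(127))        # ^?

# The "destructive backspace" mentioned above: back up, blank the character, back up again.
ERASE_LAST = "\b \b"
print(repr("abcd" + ERASE_LAST))  # 'abcd\x08 \x08'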
Data structuring The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards. End of medium (EM) warns that the tape (or other recording medium) is ending. While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator. Transmission control The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors. The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes. The escape character (ESC) was intended to "quote" the next character: if it was another control character, it would be printed rather than performing its control function. It is almost never used for this purpose today. Various printable characters are used as visible "escape characters", depending on context. The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems. The cancel character (CAN) signalled that the previous element should be discarded. The negative acknowledge character (NAK) is a flag usually used to indicate that there was a problem with reception and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate that no problem was detected with the current element. When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at
Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down. Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter"). Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. 
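The two conventions just described, a control key combination producing the letter's code with the high-order bits cleared, and caret notation displaying a control code as a caret plus the character 64 positions higher, can be sketched in a few lines of Python:

```python
def ctrl(key: str) -> int:
    """Code produced by holding Ctrl with a letter key: keep only the low five bits."""
    return ord(key.upper()) & 0x1F

def caret_notation(code: int) -> str:
    """Render a control code (0-31) in caret notation."""
    return "^" + chr(code + 64)

assert ctrl("g") == ctrl("G") == 7        # Ctrl+G -> BEL, regardless of letter case
assert caret_notation(7) == "^G"
assert caret_notation(8) == "^H"          # Backspace
assert caret_notation(13) == "^M"         # Carriage return / Enter
```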
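Returning to the separator characters (FS, GS, RS and US) described above: they can still be used to give flat text a simple record structure without colliding with commas, tabs or newlines inside the data. A minimal sketch, with invented example values:

```python
# ASCII separator control characters (decimal 28-31): file, group, record, unit.
FS, GS, RS, US = "\x1c", "\x1d", "\x1e", "\x1f"

# Two records, each a list of fields; the values are invented for the example.
records = [["Ada Lovelace", "1815", "London"],
           ["Alan Turing", "1912", "Maida Vale"]]

# Units (fields) are joined with US, records with RS.
encoded = RS.join(US.join(fields) for fields in records)

# Decoding is the reverse: split on RS, then on US.
decoded = [record.split(US) for record in encoded.split(RS)]
assert decoded == records
```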
other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is known to form almost ten million compounds, a large majority of all chemical compounds. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at about 10.8 MPa and 4,600 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure. Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature. Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order. Carbon-based compounds form the basis of all known life on Earth, and the carbon–nitrogen cycle provides some of the energy produced by the Sun and other stars. Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel: Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2. Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification: C + H2O → CO + H2. Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools. The system of carbon allotropes spans a range of extremes. Allotropes Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. 
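As a rough worked example of the smelting stoichiometry quoted above (Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2), the sketch below estimates the mass of carbon consumed per tonne of iron; the molar masses are standard values and real furnace practice is ignored.

```python
M_FE, M_C = 55.845, 12.011          # molar masses in g/mol

# Stoichiometry: 4 mol of carbon accompany the production of 3 mol of iron.
carbon_per_iron_mass = (4 * M_C) / (3 * M_FE)   # about 0.287 g of C per g of Fe

iron_tonnes = 1.0
print(f"carbon consumed: {carbon_per_iron_mass * iron_tonnes:.3f} t per t of Fe")
```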
Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne). Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen-based engine in cars. The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot) and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature. At very high pressures, carbon forms the more compact allotrope, diamond, having about 1.5 times the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that as T → 0 K and p → 0 Pa, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond for T < 400 K, without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond. 
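The pressure at which diamond becomes the favoured allotrope at room temperature can be estimated from the 2.9 kJ/mol free-energy difference quoted above and the molar volumes of the two forms. The densities used below are assumed handbook values, and the estimate neglects their pressure dependence, so it is only a back-of-envelope sketch.

```python
M_C = 12.011e-3          # kg/mol
RHO_GRAPHITE = 2266.0    # kg/m^3 (assumed handbook value)
RHO_DIAMOND = 3515.0     # kg/m^3 (assumed handbook value)
DELTA_G = 2.9e3          # J/mol for graphite -> diamond at 298 K, quoted above

# Molar volume change on converting graphite to the denser diamond (negative).
delta_v = M_C / RHO_DIAMOND - M_C / RHO_GRAPHITE   # about -1.9e-6 m^3/mol

# d(deltaG)/dp = delta_v, so deltaG reaches zero at roughly -DELTA_G / delta_v.
p_transition = -DELTA_G / delta_v
print(f"{p_transition / 1e9:.1f} GPa")   # about 1.5 GPa, in the commonly quoted range
```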
Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of purely flat hexagonal cells, some of the cells from which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene). Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure. Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond. In 2015, a team at the North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy, short-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to diamonds. In the vapor phase, some of the carbon is in the form of dicarbon (C2). When excited, this gas glows green. Occurrence Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts. In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. 
It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the earth is about 5.97 × 10²⁴ kg, this would imply 4360 million gigatonnes of carbon. This is much more than the amount of carbon in the oceans or atmosphere (below). In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well. Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven sources of natural gas contain about 105 gigatonnes of carbon, and studies estimate that "unconventional" deposits such as shale gas represent about another 540 gigatonnes of carbon. Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon at between 500 and 2,500 Gt, with some as high as 3,000 Gt. In the past, quantities of hydrocarbons were greater. According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt. Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble and so on). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuel. As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured. Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As a result, a small fraction of atmospheric carbon dioxide contains carbon-14. Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. Such asteroids could in principle be used for space-based carbon mining, which may be possible in the future but is currently beyond existing technology. Isotopes Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. 
The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C. Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β emission. Because of its relatively short half-life of 5730 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years. There are 15 known isotopes of carbon and the shortest-lived of these is ⁸C, which decays through proton emission and alpha decay and has a half-life of 1.98739 × 10⁻²¹ s. The exotic ¹⁹C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density. Formation in stars Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentrations that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang. According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernovae, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst. Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds. Carbon cycle Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. 
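Before turning to the carbon cycle, note that the radiocarbon dating mentioned above reduces to a one-line relationship between the surviving ¹⁴C fraction and age; the fractions in the sketch are illustrative inputs, not measured data.

```python
import math

HALF_LIFE_C14 = 5730.0  # years, as given above

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years implied by the fraction of the original 14C still present."""
    return (HALF_LIFE_C14 / math.log(2)) * math.log(1.0 / remaining_fraction)

print(round(radiocarbon_age(0.5)))    # 5730   (one half-life)
print(round(radiocarbon_age(0.25)))   # 11460  (two half-lives)
print(round(radiocarbon_age(0.01)))   # ~38000, near the ~40,000-year practical limit
```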
For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned. Compounds Organic compounds Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon-carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not. A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen. The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules. In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates. Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels. When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, and aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. 
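Carbon's fixed valence of four also underlies simple bookkeeping formulas in organic chemistry. As an illustration that goes slightly beyond the text above, the "degree of unsaturation" (rings plus multiple bonds) implied by a molecular formula follows directly from the valences; a minimal sketch:

```python
def degrees_of_unsaturation(c: int, h: int, n: int = 0, halogens: int = 0, o: int = 0) -> float:
    """Rings plus multiple bonds implied by a molecular formula.

    Follows from carbon being tetravalent, nitrogen trivalent, and hydrogen and
    the halogens monovalent; divalent oxygen drops out of the count entirely.
    """
    return (2 * c + 2 + n - h - halogens) / 2

assert degrees_of_unsaturation(c=6, h=14) == 0        # hexane: saturated chain
assert degrees_of_unsaturation(c=6, h=6) == 4         # benzene: one ring + three double bonds
assert degrees_of_unsaturation(c=2, h=6, o=1) == 0    # ethanol: oxygen has no effect
```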
Inorganic compounds Commonly, carbon-containing compounds which are associated with minerals or which do not contain bonds to the other carbon atoms, halogens, or hydrogen, are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but as most compounds with multiple single-bonded | cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic. With reactive metals, such as tungsten, carbon forms either carbides (C4−) or acetylides (C22−) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds. Organometallic compounds Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η2-alkene compounds (for example, Zeise's salt), and η3-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds. While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable dodecahedral derivatives of the [B12H12]2- unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]2+ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms. In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. 
More specifically, the dication could be described structurally by the formulation [MeC(η5-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to a η5-C5Me5− fragment through all five of the carbons of the ring. Note that in the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding. History and etymology The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof and kulstof respectively, all literally meaning coal-substance. Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air. In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook. A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous. Production Graphite Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil and North Korea. Graphite deposits are of metamorphic origin, found in association with quartz, mica and feldspars in schists, gneisses and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. 
Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made simply by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water. There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to the scientific usage, in industry "amorphous" refers to very small crystal size rather than a complete lack of crystal structure. Amorphous graphite is used for lower-value products and is the lowest priced. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka. According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009. Diamond The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world. Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken to prevent larger diamonds from being destroyed, and the particles are subsequently sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore. Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725. Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and now an accumulated total of 4.5 billion carats have been mined since that date. About 20% of that amount has been mined in the last 5 years alone, and during the last ten years 9 new mines have started production while 4 more are waiting to be opened soon. Most of these mines are located in Canada, Zimbabwe and Angola, with one in Russia. 
In the United States, diamonds have been found in Arkansas, Colorado and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana. Today, most commercially viable diamond deposits are in Russia, Botswana, Australia and the Democratic Republic of Congo. According to the British Geological Survey, Russia produced almost one-fifth of the global diamond output in 2005. Australia has the richest diamantiferous pipe, with production reaching peak levels in the 1990s. There are also commercial deposits being actively mined in the Northwest Territories of Canada, Siberia (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe), Brazil, and in Northern and Western Australia. Applications Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil. The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils for writing and drawing. It is also used as a lubricant and a pigment, 
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access). Many types of "ROM" are not literally read only, as updates to them are possible; however, writing is slow and the memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is also non-volatile and not as costly. Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage. Secondary storage Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive. In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks. Once the disk read/write head on an HDD reaches the proper placement and the data of interest, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. 
As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded. Tertiary storage Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes. When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library. Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is: Online storage is immediately available for I/O. Nearline storage is not immediately available, but can be made online quickly without human intervention. Offline storage is not immediately available, and requires some human intervention to become online. For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage. Off-line storage Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful in case of disaster: if, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and, to a much lesser extent, removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards. 
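The paging behaviour described at the start of this passage, moving the least-used pages out to secondary storage and paying a large time penalty when they are needed again, can be sketched with a toy model; the access-time figures and class name below are assumptions for illustration only.

```python
from collections import OrderedDict

# Illustrative access costs: a primary-storage hit in nanoseconds, a page
# fault serviced from secondary storage in milliseconds (assumed figures).
RAM_HIT_S = 100e-9
PAGE_FAULT_S = 10e-3

class TinyPager:
    """Toy model of demand paging with least-recently-used eviction."""

    def __init__(self, ram_frames: int):
        self.ram = OrderedDict()        # page number -> marker, ordered by recency
        self.ram_frames = ram_frames
        self.time_spent = 0.0

    def touch(self, page: int) -> None:
        if page in self.ram:                     # hit: just refresh recency
            self.ram.move_to_end(page)
            self.time_spent += RAM_HIT_S
            return
        self.time_spent += PAGE_FAULT_S          # fault: fetch from "disk"
        if len(self.ram) >= self.ram_frames:
            self.ram.popitem(last=False)         # evict the least recently used page
        self.ram[page] = True

pager = TinyPager(ram_frames=3)
for page in [1, 2, 3, 1, 2, 4, 1, 2, 5]:         # working set barely exceeds RAM
    pager.touch(page)
print(f"{pager.time_spent * 1e3:.1f} ms")        # dominated by the five page faults
```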
Characteristics of storage Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance. Volatility Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory. Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost. An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes. Mutability Read/write storage or mutable storage Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage. Slow write, fast read storage Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD. Write once storage Write once read many (WORM) allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R. Read only storage Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM. Accessibility Random access Any location in storage can be accessed at any moment in approximately the same amount of time. Such a characteristic is well suited to primary and secondary storage. Most semiconductor memories and disk drives provide random access, though semiconductor memories have far lower access latency than disk drives, as no mechanical parts need to be moved. Sequential access The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage. Addressability Location-addressable Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient, but burdensome for humans. File addressable Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. 
The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems. Content-addressable Each | computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally the fast volatile technologies (which lose data when powered off) are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Functionality Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines. Data organization and representation A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply the data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, video encodings like MPEG-4). By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, "physical bit fatigue" (loss of the physical bit's ability to maintain a distinguishable value of 0 or 1), or errors in inter- or intra-computer communication. A random bit flip (e.g. 
due to random radiation) is typically corrected upon detection. A bit, or a group of malfunctioning physical bits (the specific defective bit is not always known; the group definition depends on the specific storage device), is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. When an error is detected, the affected operation is then retried. Data compression methods allow, in many cases (such as databases), a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed ("decompress") when needed. This utilizes substantially less storage (by tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). An analysis of the trade-off between storage cost savings and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data (e.g. credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Hierarchy of storage Generally, the lower a storage tier is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has been called core memory, main memory, real storage, or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage. Primary storage Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive. This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage. 
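The error-detection and compression trade-offs described earlier in this passage can be made concrete with Python's standard zlib module; the payload below is an arbitrary illustrative byte string.

```python
import zlib

payload = ("To be, or not to be " * 200).encode("ascii")  # text as bytes, 8 bits each

# Error detection: store or transmit a CRC-32 alongside the data; a mismatch
# on re-computation reveals corruption and prompts a retry.
checksum = zlib.crc32(payload)
corrupted = b"X" + payload[1:]
assert zlib.crc32(corrupted) != checksum

# Compression: trade extra computation for substantially less storage.
compressed = zlib.compress(payload)
print(f"{len(payload)} -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(payload):.0f}% of original size)")
assert zlib.decompress(compressed) == payload
```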
Traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM: Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage. Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower. Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses: an address bus and a data bus. The CPU first sends a number called the memory address through the address bus to indicate the desired location of the data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks. 
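A minimal sketch of the address recalculation an MMU performs, assuming 4 KiB pages and a flat page table held in a dictionary; real MMUs are hardware units with multi-level tables and TLB caches, so this is only an illustration.

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common (assumed) page size

# Toy page table: virtual page number -> physical frame number (made-up mapping).
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    """Map a virtual address to a physical address, or fault if the page is unmapped."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault at virtual address {virtual_address:#x}")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 3 -> 0x3234
```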
The access time per byte for HDDs or SSDs is typically measured in milliseconds (one thousandth seconds), while the access time per byte for primary storage is measured in nanoseconds (one billionth seconds). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks. Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded. Tertiary storage Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes. When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library. Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is: Online storage is immediately available for I/O. Nearline storage is not immediately available, but can be made online quickly without human intervention. Offline storage is not immediately available, and requires some human intervention to become online. For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. 
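The paging behaviour sketched above, where the least-used pages are moved out to a swap file when primary memory fills up, can be illustrated with a small least-recently-used simulation in Python. The three-frame capacity and the page reference string are invented for the example; real operating systems use cheaper approximations of LRU rather than an exact implementation.

from collections import OrderedDict

def simulate_lru(reference_string, frames):
    """Count page faults under an LRU replacement policy with `frames` slots of RAM."""
    resident = OrderedDict()                 # pages ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in resident:
            resident.move_to_end(page)       # hit: mark as most recently used
        else:
            faults += 1                      # miss: would trigger a read from secondary storage
            if len(resident) >= frames:
                resident.popitem(last=False) # evict the least recently used page
            resident[page] = None
    return faults

print(simulate_lru([1, 2, 3, 1, 4, 2, 5, 1, 2, 3], frames=3))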
Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage. Off-line storage Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage is used to transfer information, since the detached medium can easily be physically transported. It is also useful in cases of disaster: if, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and, to a much lesser extent, removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards. Characteristics of storage Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance. Volatility Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since primary storage is required to be very fast, it predominantly uses volatile memory. Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM, with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost. An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes. Mutability Read/write storage or
logic: a proof that asserts a conditional, and proves that the antecedent leads to the consequent Strict conditional, in philosophy, logic, and mathematics Material conditional, in propositional calculus, or logical calculus in mathematics Relevance conditional, in relevance logic Conditional (computer programming), a statement or expression in computer programming languages A conditional expression in computer programming languages such as ?: Conditions in a contract Grammar and linguistics Conditional mood (or conditional tense), a verb form in many languages Conditional sentence, a sentence type used to refer to hypothetical situations and their consequences
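For the programming sense listed above, the ?: conditional expression of C-like languages has a direct analogue in Python; a minimal, purely illustrative sketch:

def parity(n: int) -> str:
    # Equivalent in spirit to the C-style expression  n % 2 == 0 ? "even" : "odd"
    return "even" if n % 2 == 0 else "odd"

print(parity(7))   # prints: odd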
used to gauge time and temperature during the firing of ceramic materials Roller cone bit, a drill bit used for drilling through rock, for example when drilling for oil and gas Skid cone, a hollow steel or plastic cone placed over the sawn end of a log Speaker cone, the cone inside a loudspeaker that moves to generate sound Spinning cone columns are used in a form of steam distillation to gently extract volatile chemicals from liquid foodstuffs People Bonnie Ethel Cone (1907–2003), American educator and founder of the University of North Carolina at Charlotte Carin Cone (born 1940), American swimmer, Olympic medalist, world record holder, and gold medal winner from the Pan American Games Chadrick Cone (born 1983), American football wide receiver for the Georgia Force in the Arena Football League Cindy Parlow Cone (born 1978), American soccer player and coach Cone sisters, Claribel Cone (1864–1929), and Etta Cone (1870–1949), collectors and socialites David Cone (born 1963), former Major League Baseball pitcher Edward T. Cone (1917–2004), American music theorist and composer Fairfax M. Cone (1903–1977), director of the American Association of Advertising Agencies Fred Cone (baseball) (1848–1909), pioneer professional baseball player Fred Cone (American football) (born 1926), former professional American football running back Fred P. Cone (1871–1948), twenty-seventh governor of Florida (Frederick Preston) Jason McCaslin (born 1980), nicknamed Cone, bassist for the Canadian band Sum 41 James Hal Cone (born 1938), advocate of Black liberation theology John Cone (born 1974), American professional wrestling referee John J. Cone, the fourth Supreme Knight of the Knights of Columbus from 1898 to 1899 Mac Cone (born 1952), Canadian show jumper Martin Cone (1882–1963), 6th president of St. Ambrose College from 1930 to 1937 Marvin Cone (1891–1965), American painter Moses H. Cone (1857–1908), American textile entrepreneur, conservationist, and philanthropist Reuben Cone (1788–1851), pioneer and landowner in Atlanta, Georgia Robert W. 
Cone (1957-2016), major general in the United States Army, and Special Assistant to the Commanding General of TRADOC Sara Cone Bryant (1873–?), author of various children's book in the early 20th century Spencer Cone Jones (1836–1915), President of the Maryland State Senate, Mayor of Rockville, Maryland Spencer Houghton Cone (1785–1855), American Baptist minister and president of the American and Foreign Bible Society Tim Cone (born 1957), American basketball coach Business Cone Mills Corporation, a textile manufacturer Computing Cone (software), a text-based e-mail and news client for Unix-like operating systems Cone tracing, a derivative of the ray-tracing algorithm that replaces rays, which have no thickness, with cones Second-order cone programming, a library of routines that implements a predictor corrector variant of the semidefinite programming algorithm Biology and medicine Cone cell, in anatomy, a type of light-sensitive cell found along with rods in the retina of the eye Cone dystrophy, an inherited ocular disorder characterized by the loss of cone cells Cone snail, a carnivorous mollusc of the family Conidae Cone-billed tanager (Conothraupis mesoleuca), a species of bird in the family Thraupidae Conifer cone, a seed-bearing organ on conifer plants Growth cone, a dynamic, actin-supported extension of a developing axon seeking its synaptic target Witch-hazel cone gall aphid (Hormaphis hamamelidis), a minuscule insect, a member of the aphid superfamily Coning, a brain herniation in which the cerebellar tonsils move downwards through the foramen magnum Astronomy Cone Nebula (also known as NGC 2264), an H II region in the constellation of Monoceros Ionization cone, cones of material extending out from spiral galaxies Geography Cinder cone, a steep conical hill of volcanic fragments around and downwind from a volcanic vent Cone (hill), a hill in the shape of a cone which may or may not be volcanic in origin Cone (Phrygia), a town and bishopric of ancient Phrygia Dirt cone, a feature of a glacier or snow patch, in which dirt forms a coating insulating the ice below Parasitic cone (or satellite cone), a geographical feature found around a volcano Shatter cone, rare geological feature in the bedrock beneath meteorite impact craters or underground nuclear explosions Volcanic cone, among the simplest volcanic formations in the world Mapping Lambert conformal conic projection (LCC), a conic map projection, which is often used for aeronautical charts Places Cone, Michigan, an unincorporated community in Michigan Cone, Texas, an unincorporated community in Crosby |
defined as: Therefore, At equilibrium: leading to: and Obtaining the value of the standard Gibbs energy change, allows the calculation of the equilibrium constant. Addition of reactants or products For a reactional system at equilibrium: Qr = Keq; ξ = ξeq. If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq and then If activity of a reagent i increases the reaction quotient decreases. Then and The reaction will shift to the right (i.e. in the forward direction, and thus more products will form). If activity of a product j increases, then and The reaction will shift to the left (i.e. in the reverse direction, and thus less products will form). Note that activities and equilibrium constants are dimensionless numbers. Treatment of activity The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc and an activity coefficient quotient, Γ. [A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation or extensions such as Davies equation Specific ion interaction theory or Pitzer equations may be used.Software (below) However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here. For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by so the general expression defining an equilibrium constant is valid for both solution and gas phases. Concentration quotients In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant. However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant. Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjustedSoftware (below). 
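The defining relations referred to but not rendered in the passage above take the following standard form (a sketch in conventional notation, with a_i the activity and \nu_i the signed stoichiometric coefficient of species i, and c_i, z_i the concentration and charge of ion i):

\[
Q_r = \prod_i a_i^{\nu_i}, \qquad
\Delta_r G = \Delta_r G^{\ominus} + RT \ln Q_r ,
\]
\[
\text{at equilibrium}\;(\Delta_r G = 0,\; Q_r = K_{\mathrm{eq}}):\qquad
\Delta_r G^{\ominus} = -RT \ln K_{\mathrm{eq}} ,
\]
\[
a_i = \gamma_i\,[\mathrm{A}_i], \qquad
I = \tfrac12 \sum_{i=1}^{N} c_i z_i^{2} .
\]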
Metastable mixtures A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3: 2 SO2 + O2 ⇌ 2 SO3. The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations. Likewise, the formation of bicarbonate from carbon dioxide and water, CO2 + 2 H2O ⇌ HCO3− + H3O+, is very slow under normal conditions but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase. Pure substances When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one. Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains CH3CO2H + H2O ⇌ CH3CO2− + H3O+ For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as . A particular case is the self-ionization of water 2 H2O ⇌ H3O+ + OH− Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature. The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion. Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction: 2 CO ⇌ CO2 + C for which the equation (without solid carbon) is written as: Multiple equilibria Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated. K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants and corresponds to the overall dissociation H2A ⇌ A2− + 2 H+. Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants. β1 and β2 are examples of association constants. Clearly and ; and For multiple equilibrium systems, also see: theory of Response reactions. Effect of temperature The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
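In conventional notation, the expressions referred to but not rendered in this passage are (a sketch of the standard forms, with the activities of water and of pure solids taken as one):

\[
K_a = \frac{[\mathrm{CH_3CO_2^-}][\mathrm{H_3O^+}]}{[\mathrm{CH_3CO_2H}]},
\qquad
K_w = [\mathrm{H_3O^+}][\mathrm{OH^-}],
\qquad
\beta_D = K_1 K_2 ,
\]
\[
\frac{\mathrm{d}\ln K}{\mathrm{d}T} = \frac{\Delta H^{\ominus}}{RT^{2}}
\quad\text{(van 't Hoff equation)} .
\]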
Effect of electric and magnetic fields The effect of electric field on equilibrium has been studied by Manfred Eigen among others. Types of equilibrium Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases. In the gas phase: rocket engines The industrial synthesis such as ammonia in the Haber–Bosch process (depicted right) takes place through a succession of equilibrium steps including adsorption processes Atmospheric chemistry Seawater and other natural waters: chemical oceanography Distribution between two phases log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug Liquid–liquid extraction, Ion exchange, Chromatography Solubility product Uptake and release of oxygen by hemoglobin in blood Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide In certain oscillating reactions, the approach to equilibrium is not asymptotically but in the form of a damped oscillation . The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations. When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle. In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined. Composition of a mixture When the only equilibrium is that of the formation of a 1:1 adduct as the composition of a mixture, there are many ways that the composition of a mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid. There are three approaches to the general calculation of the composition of a mixture at equilibrium. The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions. Minimize the Gibbs energy of the system. Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass. Mass-balance equations In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. 
The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A: with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations. When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]2 and [OH] = Kw[H]−1, so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants. General expressions applicable to all systems with two reagents, A and B would be It is easy to see how this can be extended to three or more reagents. Polybasic acids The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A. The diagram alongside, for the hydrolysis of the aluminium Lewis acid Al3+(aq), shows the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium. Solution and precipitation The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are aluminium hydroxides Al(OH)2+, and , but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, , is formed. Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
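The mass-balance calculation just described, where the free concentration [A] is obtained from a known [H] and the complexes then follow from the association constants, can be sketched in a few lines of Python. The association constants and the total concentration below are invented purely for illustration.

def diprotic_speciation(beta1, beta2, total_A, h_free):
    """Species concentrations of a diprotic system A/HA/H2A at a known free [H].

    Uses the mass balance TA = [A](1 + beta1*[H] + beta2*[H]**2), with
    [HA] = beta1*[A][H] and [H2A] = beta2*[A][H]**2 as in the text."""
    a_free = total_A / (1 + beta1 * h_free + beta2 * h_free ** 2)
    return {"A": a_free,
            "HA": beta1 * a_free * h_free,
            "H2A": beta2 * a_free * h_free ** 2}

# Hypothetical association constants (beta1 = 1/Ka2, beta2 = 1/(Ka1*Ka2)) at pH 4.0.
species = diprotic_speciation(beta1=1.0e5, beta2=1.0e9, total_A=0.01, h_free=1.0e-4)
print({name: f"{conc:.2e}" for name, conc in species.items()})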
Minimization of Gibbs energy At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum: where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as: where is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity. For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints: where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero. This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used). Define: where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λj to be treated independently, and it can be
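In conventional notation, the constrained minimisation described above reads as follows (a sketch of the standard Lagrange-multiplier setup):

\[
G = \sum_{j} N_j \mu_j , \qquad
\mu_j = \mu_j^{\ominus} + RT \ln A_j ,
\]
\[
\text{subject to}\quad \sum_{j} a_{ij} N_j = b_i \quad (i = 1, \dots, k),
\]
\[
\mathcal{G} = G + \sum_{i} \lambda_i \Big( b_i - \sum_{j} a_{ij} N_j \Big),
\qquad
\frac{\partial \mathcal{G}}{\partial N_j} = 0
\;\Longrightarrow\;
\mu_j = \sum_{i} \lambda_i\, a_{ij} .
\]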
their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics. There are many ways to enumerate k combinations. One way is to visit all the binary numbers less than 2n. Choose those numbers having k nonzero bits, although this is very inefficient even for small n (e.g. n = 20 would require visiting about one million numbers while the maximum number of allowed k combinations is about 186 thousand for k = 10). The positions of these 1 bits in such a number is a specific k-combination of the set { 1, ..., n }. Another simple, faster way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination and then repeatedly moving to the next allowed k-combination by incrementing the last index number if it is lower than n-1 (zero-based) or n (one-based) or the last index number x that is less than the index number following it minus one if such an index exists and resetting the index numbers after x to {x+1, x+2, ...}. Number of combinations with repetition A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation: If S has n elements, the number of such k-multisubsets is denoted by, a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients: This relationship can be easily proved using a representation known as stars and bars. A solution of the above Diophantine equation can be represented by stars, a separator (a bar), then more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n - 1 (since a separation into n parts needs n-1 separators). Thus, a string of k + n - 1 (or n + k - 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of positions to place stars and filling the remaining positions with bars. 
For example, the solution of the equation (n = 4 and k = 10) can be represented by The number of such strings is the number of ways to place 10 stars in 13 positions, which is the number of 10-multisubsets of a set with 4 elements. As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for , This identity follows from interchanging the stars and bars in the above representation. Example of counting multisubsets For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions of the equation and the last column gives the stars and bars representation of the solutions. Number of k-combinations for all k The number of k-combinations for all k is the number of subsets of a set of n elements. The binomial coefficients also satisfy the symmetry relation C(n, k) = C(n, n − k) for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an (n − k)-combination. Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember: where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by (n − k)!, so it is certainly computationally less efficient than that formula. The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula. From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions: . Together with the basic cases , these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size . Example of counting combinations As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as: Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required: Another alternative computation, equivalent to the first, is based on writing which gives . When evaluated in the following order, , this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur. Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation: Enumerating k-combinations One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations.
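These counts can be checked directly with Python's standard library; the following minimal sketch reproduces the five-card-hand and donut figures from the text and the stars-and-bars formula for multisets.

from itertools import combinations, combinations_with_replacement
from math import comb

# Five-card hands from a 52-card deck, and the symmetry C(n, k) = C(n, n - k).
assert comb(52, 5) == 2598960 == comb(52, 52 - 5)

# "n multichoose k" via stars and bars: C(n + k - 1, k); four donut types, three donuts.
def multichoose(n, k):
    return comb(n + k - 1, k)

assert multichoose(4, 3) == 20
assert multichoose(4, 3) == len(list(combinations_with_replacement("ABCD", 3)))

# Exhaustive enumeration agrees with the formula on a small case.
assert len(list(combinations(range(6), 3))) == comb(6, 3) == 20

print("all combination counts check out")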
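The index-tracking procedure described earlier for generating k-combinations in lexicographic order can be written out as a short Python generator. This is a sketch using zero-based indices; Python's itertools.combinations produces the same ordering and is used here only as a cross-check.

from itertools import combinations

def k_combinations(n, k):
    """Yield all k-combinations of range(n) in lexicographic order, by repeatedly
    advancing the rightmost index that can still move and resetting those after it."""
    if k > n:
        return
    indices = list(range(k))                  # first combination: [0, 1, ..., k-1]
    while True:
        yield tuple(indices)
        # Find the rightmost index that can be incremented.
        for i in reversed(range(k)):
            if indices[i] != i + n - k:
                break
        else:
            return                            # last combination reached
        indices[i] += 1
        for j in range(i + 1, k):             # reset the indices to its right
            indices[j] = indices[j - 1] + 1

assert list(k_combinations(5, 3)) == list(combinations(range(5), 3))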
1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum. Types On virtually all computer platforms, software can be grouped into a few broad categories. Purpose, or domain of use Based on the goal, computer software can be divided into: Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software. System software manages hardware behaviour, as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software, and it includes the following: Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system. Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at minimum at least one input device and at least one output device, a computer typically needs more than one device driver. Utilities are computer programs designed to assist users in the maintenance and care of their computers. Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes. Nature or domain of execution Desktop applications such as web browsers and Microsoft Office, as well as smartphone and tablet applications (called "apps"). JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin. Server software, including: Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser. Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function. Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). 
In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run). Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it. Programming tools Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software. Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE. Topics Architecture People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software. Platform software The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software. Application software Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications. User-written software End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. 
Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers. Execution Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software.
This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions. Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together. Quality and reliability Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs" which are often discovered during alpha and beta testing. Software is often also a victim to what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs. Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function much easier together. License The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies. Proprietary software can be divided into two types: freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software). As the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality. software available for a fee, which can only be legally used on purchase of a license. Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Patents Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. 
So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code. Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents. Design and implementation Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality. Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, such as GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their |
by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them. Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm. The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine language Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instruction in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages. Compiler languages High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research. These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target for varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation. Source code entry Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards. 
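To make the contrast above concrete, the sketch below expresses a calculation the way the compiler-language passage describes: as an infix formula that the language implementation translates into machine instructions, rather than as a hand-written sequence of operation codes such as "ADD X, TOTAL". This is an illustrative sketch only; Python stands in here for any high-level language, and the function name and figures are invented for the example.

```python
# Illustrative sketch: a calculation written as an infix formula.
# At the assembly level this would be spelled out one instruction at a time
# (load a value, multiply, add, store the result), with the exact notation
# depending on the machine's instruction set.

def gross_pay(hours_worked: float, hourly_rate: float, bonus: float = 0.0) -> float:
    """Return pay computed from an infix formula; translation into machine
    instructions is left to the compiler or interpreter."""
    return hours_worked * hourly_rate + bonus


if __name__ == "__main__":
    print(gross_pay(38.5, 21.0, bonus=50.0))  # prints 858.5
```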
Modern programming Quality requirements Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important: Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors). Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages. Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface. Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code. Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term. Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper. Readability of source code In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability. Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it. 
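As a small illustration of the kind of simple readability transformation meant here, the sketch below shows two functionally identical Python functions, one terse and one rewritten with descriptive names and a docstring. The example is invented for illustration and is not drawn from the study mentioned above.

```python
# Invented example: the same behaviour before and after simple readability
# transformations (descriptive names, a docstring, clearer structure).

def f(l):                      # hard to read: terse, uninformative names
    t = 0
    for x in l:
        if x % 2 == 0:
            t += x * x
    return t


def sum_of_squares_of_evens(numbers):
    """Return the sum of the squares of the even values in `numbers`."""
    total = 0
    for number in numbers:
        if number % 2 == 0:
            total += number * number
    return total


assert f([1, 2, 3, 4]) == sum_of_squares_of_evens([1, 2, 3, 4]) == 20
```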
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include: Different indent styles (whitespace) Comments Decomposition Naming conventions for objects (such as variables, classes, functions, procedures, etc.) The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills. Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (I.D.Es) aim to integrate all such help. Techniques like Code refactoring can enhance readability. Algorithmic complexity The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms | 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards. Modern programming Quality requirements Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important: Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors). Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages. Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface. Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code. 
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term. Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper. Readability of source code In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability. Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it. Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include: Different indent styles (whitespace) Comments Decomposition Naming conventions for objects (such as variables, classes, functions, procedures, etc.) The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills. Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (I.D.Es) aim to integrate all such help. Techniques like Code refactoring can enhance readability. Algorithmic complexity The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are |
be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. in Troilus and Criseyde, The Knight's Tale, The Clerk's Tale, The Franklin's Tale, The Parson's Tale and The Tale of Melibee, in the character of Lady Nature in The Parliament of Fowls and some of the shorter poems, such as Truth, The Former Age and Lak of Stedfastnesse. Chaucer translated the work in his Boece. The Italian composer Luigi Dallapiccola used some of the text in his choral work Canti di prigionia (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work Rites of Passage (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time. Tom Shippey in The Road to Middle-earth says how "Boethian" much of the treatment of evil is in Tolkien's The Lord of the Rings. Shippey says that Tolkien knew well the translation of Boethius that was made by King Alfred and he quotes some "Boethian" remarks from Frodo, Treebeard and Elrond. Boethius and Consolatio Philosophiae are cited frequently by the main character Ignatius J. Reilly in the Pulitzer Prize-winning A Confederacy of Dunces (1980). It is a prosimetrical text, meaning that it is written in alternating sections of prose and metered verse. In the course of the text, Boethius displays a virtuosic command of the forms of Latin poetry. It is classified as a Menippean satire, a fusion of allegorical tale, platonic dialogue, and lyrical poetry. In the 20th century there were close to four hundred manuscripts still surviving, a testament to its popularity. Reconstruction of lost songs Hundreds of Latin songs were recorded in neumes from the ninth century through to the thirteenth century, including settings of the poetic passages from Boethius's The Consolation of Philosophy. The music of this song repertory had long been considered irretrievably lost because the notational signs indicated only melodic outlines, relying on now-lapsed oral traditions to fill in the missing details. However, research conducted by Dr Sam Barrett at the University of Cambridge, extended in collaboration with medieval music ensemble Sequentia, has shown that principles of musical setting for this period can be identified, providing crucial information to enable modern realisations. Sequentia performed the world premiere of the reconstructed songs from Boethius's The Consolation of Philosophy at Pembroke College, Cambridge, in April 2016, bringing to life music not heard in over 1,000 years; a number of the songs were subsequently recorded on the CD Boethius: Songs of Consolation. Metra from 11th-Century Canterbury (Glossa, 2018). The detective story behind the recovery of these lost songs is told in a documentary film, and a website launched by the University of Cambridge in 2018 provides further details of the reconstruction process, bringing together manuscripts, reconstructions, and video resources. See also Allegory in the Middle Ages Stoicism The Wheel of Fortune Consolatio Metres of Boethius Girdle book Prosimetrum References Sources Boethius, The Consolation of Philosophy. Trans. Richard H. Green, (Library of the Liberal Arts), 1962. Trans. Joel C. Relihan, (Hackett Publishing), 2001. Trans. P. G. Walsh, (Oxford World's Classics), 2001. Trans. Victor Watts, (Penguin Classics), 2000. Sanderson Beck, The Consolation of Boethius an analysis and commentary. 1996. . Henry Chadwick, Boethius: The Consolations of Music, Logic, Theology and Philosophy, 1990, . 
The Cambridge History of English and American Literature, Volume I Ch.6.5: De Consolatione Philosophiae, 1907–1921. External links The Consolation of Philosophy, many translations and commentaries from Internet Archive Consolatio Philosophiae in the original Latin with English comments at the University of Georgetown Consolatio Philosophiae from Project Gutenberg, HTML conversion, originally translated by H. R. James, London 1897. The Consolation of Philosophy, Translated by: W.V. Cooper : J.M. Dent and Company London 1902 The Temple Classics, edited by Israel Golancz M.A. Online reading and multiple ebook formats at Ex-classics. Medieval translations into Old English by Alfred the Great, | Consolation stands, by its note of fatalism and its affinities with the Christian doctrine of humility, midway between the pagan philosophy of Seneca the Younger and the later Christian philosophy of consolation represented by Thomas à Kempis. The book is heavily influenced by Plato and his dialogues (as was Boethius himself). Its popularity can in part be explained by its Neoplatonic and Christian ethical messages, although current scholarly research is still far from clear exactly why and how the work became so vastly popular in the Middle Ages. Translations into the vernacular were done by famous notables, including King Alfred (Old English), Jean de Meun (Old French), Geoffrey Chaucer (Middle English), Queen Elizabeth I (Early Modern English) and Notker Labeo (Old High German). Boethius's Consolation of Philosophy was translated into Italian by Alberto della Piagentina (1332), Anselmo Tanso (Milan, 1520), Lodovico Domenichi (Florence, 1550), Benedetto Varchi (Florence, 1551), Cosimo Bartoli (Florence, 1551) and Tommaso Tamburini (Palermo, 1657). Found within the Consolation are themes that have echoed throughout the Western canon: the female figure of wisdom that informs Dante, the ascent through the layered universe that is shared with Milton, the reconciliation of opposing forces that find their way into Chaucer in The Knight's Tale, and the Wheel of Fortune so popular throughout the Middle Ages. Citations from it occur frequently in Dante's Divina Commedia. Of Boethius, Dante remarked “The blessed soul who exposes the deceptive world to anyone who gives ear to him.”<ref>Dante The Divine Comedy. "Blessed souls" inhabit Dante's Paradise, and appear as flames. (see note above).</ref> Boethian influence can be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. in Troilus and Criseyde, The Knight's Tale, The Clerk's Tale, The Franklin's Tale, The Parson's Tale and The Tale of Melibee, in the character of Lady Nature in The Parliament of Fowls and some of the shorter poems, such as Truth, The Former Age and Lak of Stedfastnesse. Chaucer translated the work in his Boece. The Italian composer Luigi Dallapiccola used some of the text in his choral work Canti di prigionia (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work Rites of Passage (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time. Tom Shippey in The Road to Middle-earth says how "Boethian" much of the treatment of evil is in Tolkien's The Lord of the Rings. Shippey says that Tolkien knew well the translation of Boethius that was made by King Alfred and he quotes some "Boethian" remarks from Frodo, Treebeard and Elrond. Boethius and |
of poison is considered an act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a "poison dragon". China of the imagination The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. There are some scenes showing uncommon artistry for the typical martial arts film such as an airborne battle among wispy bamboo plants. Production The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 on Qingdao Xinmin News. The novel was the fourth book in a series of five. In the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, they agreed to invest US$6 million in filming, but the stipulated recovery amount must be more than six times before the two parties will start to pay dividends. Casting Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down. Filming Although its Academy Award was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-Production Corporation; the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine; the Hong Kong company EDKO Film; and the Taiwanese Zoom Hunt International Productions Company, Ltd; as well as the unspecified United China Vision, and Asia Union Film and Entertainment Ltd., created solely for this film. The film was made in Beijing, with location shooting in the Anhui, Hebei, Jiangsu, and Xinjiang provinces of China. The first phase of shooting was in the Gobi Desert where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces", he added, "That's really them in the trees." Another compounding issue was the difference between accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay so she learned the Mandarin lines phonetically; Chang Chen is from Taiwan and he speaks Mandarin in a Taiwanese accent. Only Zhang Ziyi spoke with a native Mandarin accent that Ang Lee wanted. Chow Yun Fat said, on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life." 
The film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films, as a result high quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences. Soundtrack The score was composed by Tan Dun, originally performed by Shanghai Symphony Orchestra, Shanghai National Orchestra, and Shanghai Percussion Ensemble. It also features many solo passages for cello played by Yo-Yo Ma. The "last track" ("A Love Before Time") features Coco Lee, who later performed it at the Academy Awards. The music for the entire film was produced in two weeks. Release Marketing The film was adapted into a video game, a comics series, and a 34-episode Taiwanese television series based on the original novel. The latter was released in 2004 as New Crouching Tiger, Hidden Dragon for US and Canadian release. Home media The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. Reception Box office The film premiered in cinemas on 8 December 2000, in limited release within the US. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the US grossing $8,647,295 in business, ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon opened in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film cumulatively ranked at a worldwide box-office performance position of 19. Critical response Crouching Tiger, Hidden Dragon was very well received in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 97% based on 157 reviews, with an average rating of 8.61/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim". Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin as a mother tongue. All four main actors spoke with different accents: Chow speaks with a Cantonese accent; Yeoh with a Malaysian accent; Chang Chen a Taiwanese accent; and Zhang Ziyi a Beijing accent. Yeoh responded to this complaint in a 28 December 2000, interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." 
When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand.", Yeoh responded: "Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird." The film led to a boost in popularity of Chinese wuxia films in the western world, where they were previously little known, and led to films such as House of Flying Daggers and Hero marketed towards Western audiences. The film also provided the breakthrough role for Zhang Ziyi's career, who noted: Film Journal noted that Crouching Tiger, Hidden Dragon "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market. Accolades Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, which was the highest ever for a non-English language film, up until it was tied by Roma (2018). The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time. and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2016, it was voted the 35th-best film of the 21st century as picked by 177 film critics from around the world in a poll conducted by BBC. The film was included in BBC's 2018 list of The 100 greatest foreign language films ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its 100 best films of the 21st century list. Sequel A direct-to-television sequel to the film, Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Woo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed to Mandarin for Chinese releases. Sword of Destiny is based on the book Iron Knight, Silver Vase, the next (and last) novel in the Crane-Iron Pentalogy. It features a mostly | The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 on Qingdao Xinmin News. The novel was the fourth book in a series of five. In the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, they agreed to invest US$6 million in filming, but the stipulated recovery amount must be more than six times before the two parties will start to pay dividends. Casting Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down. 
Filming Although its Academy Award was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-Production Corporation; the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine; the Hong Kong company EDKO Film; and the Taiwanese Zoom Hunt International Productions Company, Ltd; as well as the unspecified United China Vision, and Asia Union Film and Entertainment Ltd., created solely for this film. The film was made in Beijing, with location shooting in the Anhui, Hebei, Jiangsu, and Xinjiang provinces of China. The first phase of shooting was in the Gobi Desert where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces", he added, "That's really them in the trees." Another compounding issue was the difference between accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay so she learned the Mandarin lines phonetically; Chang Chen is from Taiwan and he speaks Mandarin in a Taiwanese accent. Only Zhang Ziyi spoke with a native Mandarin accent that Ang Lee wanted. Chow Yun Fat said, on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life." The film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films, as a result high quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences. Soundtrack The score was composed by Tan Dun, originally performed by Shanghai Symphony Orchestra, Shanghai National Orchestra, and Shanghai Percussion Ensemble. It also features many solo passages for cello played by Yo-Yo Ma. The "last track" ("A Love Before Time") features Coco Lee, who later performed it at the Academy Awards. The music for the entire film was produced in two weeks. Release Marketing The film was adapted into a video game, a comics series, and a 34-episode Taiwanese television series based on the original novel. The latter was released in 2004 as New Crouching Tiger, Hidden Dragon for US and Canadian release. Home media The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. Reception Box office The film premiered in cinemas on 8 December 2000, in limited release within the US. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the US grossing $8,647,295 in business, ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place screening in 837 theaters. 
Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon opened in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film cumulatively ranked at a worldwide box-office performance position of 19. Critical response Crouching Tiger, Hidden Dragon was very well received in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 97% based on 157 reviews, with an average rating of 8.61/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim". Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin as a mother tongue. All four main actors spoke with different accents: Chow speaks with a Cantonese accent; Yeoh with a Malaysian accent; Chang Chen a Taiwanese accent; and Zhang Ziyi a Beijing accent. Yeoh responded to this complaint in a 28 December 2000, interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand.", Yeoh responded: "Yes, provinces all have their very own strong |
least one occasion. He took Barcelona in a great siege in 797. Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). Charlemagne's opposition to his daughters' marriages may possibly have intended to prevent the creation of cadet branches of the family to challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle. Italian campaigns Conquest of the Lombard kingdom At his succession in 772, Pope Adrian I demanded the return of certain cities in the former exarchate of Ravenna in accordance with a promise at the succession of Desiderius. Instead, Desiderius took over certain papal cities and invaded the Pentapolis, heading for Rome. Adrian sent ambassadors to Charlemagne in autumn requesting he enforce the policies of his father, Pepin. Desiderius sent his own ambassadors denying the pope's charges. The ambassadors met at Thionville, and Charlemagne upheld the pope's side. Charlemagne demanded what the pope had requested, but Desiderius swore never to comply. Charlemagne and his uncle Bernard crossed the Alps in 773 and chased the Lombards back to Pavia, which they then besieged. Charlemagne temporarily left the siege to deal with Adelchis, son of Desiderius, who was raising an army at Verona. The young prince was chased to the Adriatic littoral and fled to Constantinople to plead for assistance from Constantine V, who was waging war with Bulgaria. The siege lasted until the spring of 774 when Charlemagne visited the pope in Rome. There he confirmed his father's grants of land, with some later chronicles falsely claiming that he also expanded them, granting Tuscany, Emilia, Venice and Corsica. The pope granted him the title patrician. He then returned to Pavia, where the Lombards were on the verge of surrendering. In return for their lives, the Lombards surrendered and opened the gates in early summer. Desiderius was sent to the abbey of Corbie, and his son Adelchis died in Constantinople, a patrician. Charles, unusually, had himself crowned with the Iron Crown and made the magnates of Lombardy pay homage to him at Pavia. Only Duke Arechis II of Benevento refused to submit and proclaimed independence. Charlemagne was then master of Italy as king of the Lombards. He left Italy with a garrison in Pavia and a few Frankish counts in place the same year. Instability continued in Italy. In 776, Dukes Hrodgaud of Friuli and Hildeprand of Spoleto rebelled. Charlemagne rushed back from Saxony and defeated the Duke of Friuli in battle; the Duke was slain. The Duke of Spoleto signed a treaty. Their co-conspirator, Arechis, was not subdued, and Adelchis, their candidate in Byzantium, never left that city. Northern Italy was now faithfully his. 
Southern Italy In 787, Charlemagne directed his attention towards the Duchy of Benevento, where Arechis II was reigning independently with the self-given title of Princeps. Charlemagne's siege of Salerno forced Arechis into submission. However, after Arechis II's death in 787, his son Grimoald III proclaimed the Duchy of Benevento newly independent. Grimoald was attacked many times by Charles' or his sons' armies, without achieving a definitive victory. Charlemagne lost interest and never again returned to Southern Italy where Grimoald was able to keep the Duchy free from Frankish suzerainty. Carolingian expansion to the south Vasconia and the Pyrenees The destructive war led by Pepin in Aquitaine, although brought to a satisfactory conclusion for the Franks, proved the Frankish power structure south of the Loire was feeble and unreliable. After the defeat and death of Waiofar in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waifer. He took refuge with the ally Duke Lupus II of Gascony, but probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks to whom he pledged loyalty, which seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as the Count of Bordeaux (778) and other counts of Frankish background in bordering areas (Toulouse, County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively or schemed the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army in Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious) based on a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778). The new political arrangement for Vasconia did not sit well with local lords. As of 788 Adalric was fighting and capturing Chorson, Carolingian Count of Toulouse. He was eventually released, but Charlemagne, enraged at the compromise, decided to depose him and appointed his trustee William of Gellone. William, in turn, fought the Basques and defeated them after banishing Adalric (790). From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. As of 794, a Frankish vassal, the Basque lord Belasko (al-Galashki, 'the Gaul') ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William Count of Toulouse and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, count overthrown in Aragon, Duke Seguin of Bordeaux deposed, uprising of the Basque lords, etc.) were to prove it ephemeral upon Charlemagne's death. 
Roncesvalles campaign According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muwallad) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power, and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain. In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards, and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall for him. Indeed, Charlemagne faced the toughest battle of his career. The Muslims forced him to retreat, so he decided to go home, as he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as his army was crossing back through the Pass of Roncesvalles, one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of The Song of Roland (La Chanson de Roland), regarded as the first major work in the French language. Contact with the Saracens The conquest of Italy brought Charlemagne in contact with the Saracens who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Saracens in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Saracen pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock. Wars with the Moors In Hispania, the struggle against the Moors continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania. In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They probably took Tarragona and forced the submission of Tortosa in 809. 
The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813. Eastern campaigns Saxon Wars Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite scara bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxonia and proceeded to convert it to Christianity. The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and farthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia. In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader Hessi converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended. Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians. In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782. He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frank. The laws were draconian on religious issues; for example, the Capitulatio de partibus Saxoniae prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the Lex Frisonum, as he did for most subject peoples. Thereafter, the Saxons maintained the peace for seven years, but in 792 Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. 
The Frankish power structure south of the Loire had proved feeble and unreliable. After the defeat and death of Waiofar in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waiofar. He took refuge with his ally Duke Lupus II of Gascony, but probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks, to whom he pledged loyalty, which seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle. Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as the Count of Bordeaux (778) and other counts of Frankish background in bordering areas (Toulouse, County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively to, or even schemed, the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army in Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious), built on a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778). The new political arrangement for Vasconia did not sit well with local lords. As of 788, Adalric was fighting and capturing Chorson, Carolingian Count of Toulouse. He was eventually released, but Charlemagne, enraged at the compromise, decided to depose him and appointed his trustee William of Gellone. William, in turn, fought the Basques and defeated them after banishing Adalric (790). From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. 
As of 794, a Frankish vassal, the Basque lord Belasko (al-Galashki, 'the Gaul') ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William Count of Toulouse and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, count overthrown in Aragon, Duke Seguin of Bordeaux deposed, uprising of the Basque lords, etc.) were to prove Frankish control there ephemeral upon Charlemagne's death. Roncesvalles campaign According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muwallad) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power, and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain. In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards, and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall to him. Indeed, Charlemagne faced the toughest battle of his career. The Muslims forced him to retreat, so he decided to go home, as he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as his army was crossing back through the Pass of Roncesvalles, one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of The Song of Roland (La Chanson de Roland), regarded as the first major work in the French language. Contact with the Saracens The conquest of Italy brought Charlemagne into contact with the Saracens who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Saracens in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Saracen pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock. Wars with the Moors In Hispania, the struggle against the Moors continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. 
The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania. In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They probably took Tarragona and forced the submission of Tortosa in 809. The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813. Eastern campaigns Saxon Wars Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite scara bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxonia and proceeded to convert it to Christianity. The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and farthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia. In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader Hessi converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended. Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians. In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782. He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frank. The laws were draconian on religious issues; for example, the Capitulatio de partibus Saxoniae prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. 
During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the Lex Frisonum, as he did for most subject peoples. Thereafter, the Saxons maintained the peace for seven years, but in 792 Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. The last insurrection occurred in 804, more than thirty years after Charlemagne's first campaign against them, but it also failed. Submission of Bavaria By 774, Charlemagne had invaded the Kingdom of Lombardy, and he later annexed the Lombardian territories and assumed its crown, placing the Papal States under Frankish protection. The Duchy of Spoleto south of Rome was acquired in 774, while in the central western parts of Europe, the Duchy of Bavaria was absorbed and the Bavarian policy of establishing tributary marches (borders protected in return for tribute or taxes) among the Slavic Serbs and Czechs continued. The remaining power confronting the Franks in the east was the Avars. However, Charlemagne acquired other Slavic areas, including Bohemia, Moravia, Austria and Croatia. In 789, Charlemagne turned to Bavaria. He claimed that Tassilo III, Duke of Bavaria, was an unfit ruler due to his oath-breaking. The charges were exaggerated, but Tassilo was deposed anyway and put in the monastery of Jumièges. In 794, Tassilo was made to renounce any claim to Bavaria for himself and his family (the Agilolfings) at the synod of Frankfurt; he formally handed over to the king all of the rights he had held. Bavaria was subdivided into Frankish counties, as had been done with Saxony. Avar campaigns In 788, the Avars, an Asian nomadic group that had settled down in what is today Hungary (Einhard called them Huns), invaded Friuli and Bavaria. Charlemagne was preoccupied with other matters until 790, when he marched down the Danube and ravaged Avar territory as far as Győr. A Lombard army under Pippin then marched into the Drava valley and ravaged Pannonia. The campaigns ended when the Saxons revolted again in 792. For the next two years, Charlemagne was occupied, along with the Slavs, against the Saxons. Pippin and Duke Eric of Friuli continued, however, to assault the Avars' ring-shaped strongholds. The great Ring of the Avars, their capital fortress, was taken twice. The booty was sent to Charlemagne at his capital, Aachen, and redistributed to his followers and to foreign rulers, including King Offa of Mercia. Soon the Avar tuduns had lost the will to fight and travelled to Aachen to become vassals to Charlemagne and to become Christians. Charlemagne accepted their surrender and sent one native chief, baptised Abraham, back to Avaria with the ancient title of khagan. Abraham kept his people in line, but in 800, the Bulgarians under Khan Krum attacked the remains of the Avar state. In 803, Charlemagne sent a Bavarian army into Pannonia, defeating and bringing an end to the Avar confederation. In November of the same year, Charlemagne went to Regensburg where the Avar leaders acknowledged him as their ruler. 
In 805, the Avar khagan, who had already been baptised, went to Aachen to ask permission to settle with his people south-eastward from Vienna. The Transdanubian territories became integral parts of the Frankish realm, a situation that ended with the Magyar conquest in 899–900. Northeast Slav expeditions In 789, Charlemagne turned to his new pagan neighbours, the Slavs, and marched an Austrasian-Saxon army across the Elbe into Obotrite territory. The Slavs ultimately submitted under their leader Witzin. Charlemagne then accepted the surrender of the Veleti under Dragovit and demanded many hostages. He also demanded permission to send missionaries into this pagan region unmolested. The army marched to the Baltic before turning around and marching to the Rhine, winning much booty with no harassment. The tributary Slavs became loyal allies. In 795, when the Saxons broke the peace, the Abotrites and Veleti rebelled with their new ruler against the Saxons. Witzin died in battle and Charlemagne avenged him by harrying the Eastphalians on the Elbe. Thrasuco, his successor, led his men in conquest of the Nordalbingians and handed their leaders over to Charlemagne, who honoured him. The Abotrites remained loyal until Charles' death and fought later against the Danes. Southeast Slav expeditions When Charlemagne incorporated much of Central Europe, he brought the Frankish state face to face with the Avars and Slavs in the southeast. The Franks' southeasternmost neighbours were the Croats, who settled in Lower Pannonia and the Duchy of Croatia. While fighting the Avars, the Franks had called for Croat support, and in 796 Charlemagne won a major victory over the Avars. Duke Vojnomir of Lower Pannonia aided Charlemagne, and the Franks made themselves overlords over the Croats of northern Dalmatia, Slavonia and Pannonia. The Frankish commander Eric of Friuli wanted to extend his dominion by conquering the Littoral Croat Duchy. During that time, Dalmatian Croatia was ruled by Duke Višeslav of Croatia. In the Battle of Trsat, the forces of Eric fled their positions and were routed by the forces of Višeslav. Eric was among those killed, a great blow for the Carolingian Empire. Charlemagne also directed his attention to the Slavs to the west of the Avar khaganate: the Carantanians and Carniolans. These people were subdued by the Lombards and Bavarii and made tributaries, but were never fully incorporated into the Frankish state. Imperium Coronation In 799, Pope Leo III had been assaulted by some of the Romans, who tried to put out his eyes and tear out his tongue. Leo escaped and fled to Charlemagne at Paderborn. Charlemagne, advised by the scholar Alcuin, travelled to Rome in November 800 and held a synod. On 23 December, Leo swore an oath of innocence to Charlemagne. His position having thereby been weakened, the Pope sought to restore his status. Two days later, at Mass, on Christmas Day (25 December), when Charlemagne knelt at the altar to pray, the Pope crowned him Imperator Romanorum ("Emperor of the Romans") in Saint Peter's Basilica. In so doing, the Pope rejected the legitimacy of Empress Irene of Constantinople. Charlemagne's coronation as Emperor, though intended to represent the continuation of the unbroken line of Emperors from Augustus to Constantine VI, had the effect of setting up two separate (and often opposing) Empires and two separate claims to imperial authority. 
It led to war in 802, and for centuries to come, the Emperors of both West and East would make competing claims of sovereignty over the whole. Einhard says that Charlemagne was ignorant of the Pope's intent and did not want any such coronation: A number of modern scholars, however, suggest that Charlemagne was indeed aware of the coronation; certainly, he cannot have missed the bejewelled crown waiting on the altar when he came to pray—something even contemporary sources support. Debate Historians have debated for centuries whether Charlemagne was aware before the coronation of the Pope's intention to crown him Emperor (Charlemagne declared that he would not have entered Saint Peter's had he known, according to chapter twenty-eight of Einhard's Vita Karoli Magni), but that debate obscured the more significant question of why the Pope granted the title and why Charlemagne accepted it. Collins points out "[t]hat the motivation behind the acceptance of the imperial title was a romantic and antiquarian interest in reviving the Roman Empire is highly unlikely." For one thing, such romance would not have appealed either to Franks or Roman Catholics at the turn of the ninth century, both of whom viewed the Classical heritage of the Roman Empire with distrust. The Franks took pride in having "fought against and thrown from their shoulders the heavy yoke of the Romans" and "from the knowledge gained in baptism, clothed in gold and precious stones the bodies of the holy martyrs whom the Romans had killed by fire, by the sword and by wild animals", as Pepin III described it in a law of 763 or 764. Furthermore, the new title—carrying with it the risk that the new emperor would "make drastic changes to the traditional styles and procedures of government" or "concentrate his attentions on Italy or on Mediterranean concerns more generally"—risked alienating the Frankish leadership. For both the Pope and Charlemagne, the Roman Empire remained a significant power in European politics at this time. The Byzantine Empire, based in Constantinople, continued to hold a substantial portion of Italy, with borders not far south of Rome. Charles' sitting in judgment of the Pope could be seen as usurping the prerogatives of the Emperor in Constantinople: For the Pope, then, there was "no living Emperor at that time" though Henri Pirenne disputes this saying that the coronation "was not in any sense explained by the fact that at this moment a woman was reigning in Constantinople". Nonetheless, the Pope took the extraordinary step of creating one. The papacy had since 727 been in conflict with Irene's predecessors in Constantinople over a number of issues, chiefly the continued Byzantine adherence to the doctrine of iconoclasm, the destruction of Christian images; while from 750, the secular power of the Byzantine Empire in central Italy had been nullified. By bestowing the Imperial crown upon Charlemagne, the Pope arrogated to himself "the right to appoint ... the Emperor of the Romans, ... establishing the imperial crown as his own personal gift but simultaneously granting himself implicit superiority over the Emperor whom he had created." And "because the Byzantines had proved so unsatisfactory from every point of view—political, military and doctrinal—he would select a westerner: the one man who by his wisdom and statesmanship and the vastness of his dominions ... stood out head and shoulders above his contemporaries." 
With Charlemagne's coronation, therefore, "the Roman Empire remained, so far as either of them [Charlemagne and Leo] were concerned, one and indivisible, with Charles as its Emperor", though there can have been "little doubt that the coronation, with all that it implied, would be furiously contested in Constantinople". Alcuin writes hopefully in his letters of an Imperium Christianum ("Christian Empire"), wherein, "just as the inhabitants of the [Roman Empire] had been united by a common Roman citizenship", presumably this new empire would be united by a common Christian faith. This is the view of Pirenne when he says "Charles was the Emperor of the ecclesia as the Pope conceived it, of the Roman Church, regarded as the universal Church". The Imperium Christianum was further supported at a number of synods all across Europe by Paulinus of Aquileia. What is known, from the Byzantine chronicler Theophanes, is that Charlemagne's reaction to his coronation was to take the initial steps towards securing the Constantinopolitan throne by sending envoys of marriage to Irene, and that Irene reacted somewhat favourably to them. It is important to distinguish between the universalist and localist conceptions of the empire, which remain controversial among historians. According to the former, the empire was a universal monarchy, a "commonwealth of the whole world, whose sublime unity transcended every minor distinction"; and the emperor "was entitled to the obedience of Christendom". According to the latter, the emperor had no ambition for universal dominion; his realm was limited in the same way as that of every other ruler, and when he made more far-reaching claims his object was normally to ward off the attacks either of the Pope or of the Byzantine emperor. According to this view, also, the origin of the empire is to be explained by specific local circumstances rather than by overarching theories. According to Ohnsorge, for a long time, it had been the custom of Byzantium to designate the German princes as spiritual "sons" of the Romans. What might have been acceptable in the fifth century had become provoking and insulting to the Franks in the eighth century. Charles came to believe that the Roman emperor, who claimed to head the world hierarchy of states, was, in reality, no greater than Charles himself, a king as other kings, since, beginning in 629, the emperor in Constantinople had entitled himself merely "Basileus" (translated literally as "king"). Ohnsorge finds it significant that the chief wax seal of Charles, which bore only the inscription "Christe, protege Carolum regem Francorum" ("Christ, protect Charles, king of the Franks"), was used from 772 to 813, even during the imperial period, and was not replaced by a special imperial seal, indicating that Charles felt himself to be just the king of the Franks. Finally, Ohnsorge points out that in the spring of 813 at Aachen Charles crowned his only surviving son, Louis, as emperor, without recourse to Rome and with only the acclamation of his Franks. The form in which this acclamation was offered was Frankish-Christian rather than Roman. This implies both independence from Rome and a Frankish (non-Roman) understanding of empire. Mayr-Harting argues that the Imperial title was Charlemagne's face-saving offer to incorporate the recently conquered Saxons. Since the Saxons did not have an institution of kingship for their own ethnicity, claiming the right to rule them as King of the Saxons was not possible. 
Hence, it is argued, Charlemagne used the supra-ethnic Imperial title to incorporate the Saxons, which helped to cement the diverse peoples under his rule. Imperial title Charlemagne used these circumstances to claim that he was the "renewer of the Roman Empire", which had declined under the Byzantines. In his official charters, Charles preferred the style Karolus serenissimus Augustus a Deo coronatus magnus pacificus imperator Romanum gubernans imperium ("Charles, most serene Augustus crowned by God, the great, peaceful emperor ruling the Roman empire") to the more direct Imperator Romanorum ("Emperor of the Romans"). The title of Emperor remained in the Carolingian family for years to come, but divisions of territory and in-fighting over supremacy of the Frankish state weakened its significance. The papacy itself never forgot the title nor abandoned the right to bestow it. When the family of Charles ceased to produce worthy heirs, the Pope gladly crowned whichever Italian magnate could best protect him from his local enemies. The empire would remain in continuous existence for over a millennium, as the Holy Roman Empire, a true imperial successor to Charles. Imperial diplomacy The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. While the Pope supported the reintroduction of icon veneration, he politically distanced himself from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests. The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the Ducatus Neapolitanus). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The Pax Nicephori ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810, when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as "Emperor of the Romans". Danish attacks After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. 
The pagan Danes, "a race almost unknown to his ancestors, but destined to be only too well known to his sons" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours. In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a long earthenwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites. Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811. Death In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January and as Einhard tells it: He was buried that same day, in Aachen Cathedral. The earliest surviving planctus, the Planctus de obitu Karoli, was composed by a monk of Bobbio, which he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I re-opened the tomb again and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215 Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein. Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen. An anonymous monk of Bobbio lamented: Louis succeeded him as Charles had intended. He left a testament allocating his assets in 811 that was not updated prior to his death. He left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France. Administration Organisation The Carolingian king exercised the bannum, the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the "Carolingian Renaissance". Military Charlemagne's success rested primarily on novel siege technologies and excellent logistics rather than the long-claimed "cavalry revolution" led by Charles Martel in 730s. 
However, the stirrup, which made the "shock cavalry" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century. Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire. Economic and monetary reforms Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, Charlemagne abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pippin's system for pragmatic reasons, notably a shortage of the metal. The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce. Charlemagne established a new standard, the livre carolinienne (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm. Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded. Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the continued high-quality English coin until about 1100. Jews in Charlemagne's realm Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. The lending of money in return for interest was proscribed in 814 because it violated Church law. Charlemagne introduced the Capitulary for the Jews, a prohibition on Jews engaging in money-lending due to the religious convictions of the majority of his subjects. Effectively banning money lending was a reversal of his earlier recorded general policy. Charlemagne also performed a significant number of microeconomic reforms, such as direct control of prices and feudal levies. He invited Italian Jews to immigrate, as royal clients independent of the feudal landowners, and form trading communities in the agricultural regions of Provence and the Rhineland. Their trading activities augmented the otherwise almost exclusively agricultural economies of these regions. Charlemagne's Capitulary for the Jews was not representative of his overall economic relationship or attitude towards the Frankish Jews; this relationship evolved throughout his reign. His personal physician, for example, was Jewish, and he employed one Jew, Isaac, who was his personal representative to the Muslim caliphate of Baghdad. Education reforms Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. 
Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia. Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith. Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars as well as to teach grammar and music. Due to the increased interest in intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, at the urging of their king, scholars were producing more secular books on many subjects, including history, poetry, art, music, law, theology, etc. Due to the increased number of titles, private libraries flourished. These were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court, a library was founded and a number of copies of books were produced, to be distributed by Charlemagne. Book production was completed slowly by hand and took place mainly in large monastic libraries. Books were in such demand during Charlemagne's time that these libraries lent out some books, but only if that borrower offered valuable collateral in return. Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still. The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, a Lombard; the Italians Peter of Pisa and Paulinus of Aquileia; and the Franks Angilbert, Angilram, Einhard and Waldo of Reichenau. Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic. His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn—practising the formation of letters in his bed during his free time on books and wax tablets he hid under his pillow—"his effort came too late in life and achieved little success", and his ability to read—which Einhard is silent about, and which no contemporary source supports—has also been called into question. In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it. He himself had certainly never been to Jerusalem in person. Church reforms Unlike his father, Pippin, and his uncle, Carloman, Charlemagne expanded the Church's reform programme. 
The deepening of the spiritual life was later to be seen as central to public policy and royal governance. His reform focused on strengthening the church's power structure, improving the clergy's skills and moral quality, standardising liturgical practices, improving instruction in the basic tenets of the faith, and rooting out paganism. His authority extended over church and state. He could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he had developed support from the clergy, who approved his desire to deepen the piety and morals of his subjects. In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son (ex Patre Filioque) and sanctioned the inclusion in the Nicene Creed of the Filioque ("and the Son"). 
standard also defines a "replacement" decoder, which maps all content labelled as certain encodings to the replacement character (�), refusing to process it at all. This is intended to prevent attacks (e.g. cross site scripting) which may exploit a difference between the client and server in what encodings are supported in order to mask malicious content. Although the same security concern applies to ISO-2022-JP and UTF-16, which also allow sequences of ASCII bytes to be interpreted differently, this approach was not seen as feasible for them since they are comparatively more frequently used in deployed content. The following encodings receive this treatment: Character references In addition to native character encodings, characters can also be encoded as character references, which can be numeric character references (decimal or hexadecimal) or character entity references. Character entity references are also sometimes referred to as named entities, or HTML entities for HTML. HTML's usage of character references derives from SGML. HTML character references A numeric character reference in HTML refers to a character by its Universal Character Set/Unicode code point, and uses the format &#nnnn; or &#xhhhh; where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style. Not all web browsers or email clients used by receivers of HTML documents, or text editors used by authors of HTML documents, will be able to render all HTML characters. Most modern software is able to display most or all of the characters for the user's language, and will draw a box or other clear indicator for characters they cannot render. For codes from 0 to 127, the original 7-bit ASCII standard set, most of these characters can be used without a character reference. Codes from 160 to 255 can all be created using character entity names. Only a few higher-numbered codes can be created using entity names, but all can be created by decimal number character reference. Character entity references can also have the format &name; where name is a case-sensitive alphanumeric string. For example, "λ" can also be encoded as λ in an HTML document. The character entity references <, >, " and & are predefined in HTML and SGML, because <, >, " and & are already used to delimit markup. This notably did not include XML's ' (') entity prior to HTML5. For a list of all named HTML character entity references along with the versions in which they were introduced, see List of XML and HTML character entity references. Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately, then HTML character references are usually only required for markup delimiting characters as mentioned above, and for a few special characters (or none at all if a native Unicode encoding like UTF-8 is used). Incorrect HTML entity escaping may also open up security vulnerabilities for injection attacks such as cross-site scripting. If HTML attributes are left unquoted, certain characters, most importantly whitespace, such as space and tab, must be escaped using entities. Other languages related to HTML have their own methods of escaping characters. 
XML character references Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references. These are used to escape characters that are markup sensitive in certain contexts: &amp; → & (ampersand, U+0026); &lt; → < (less-than sign, U+003C); &gt; → > (greater-than sign, U+003E); &quot; → " (quotation mark, U+0022); and &apos; → ' (apostrophe, U+0027). All other character entity references have to be defined before they can be used. For example, use of &eacute; (which gives é, Latin lower-case E with acute accent, U+00E9 in Unicode) in an XML document will generate an error unless the entity has already been defined. XML also requires that the x in hexadecimal numeric references be in lowercase: for example &#xa1b; rather than &#Xa1b;. 
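The five predefined XML entities can likewise be exercised with a small Python sketch. xml.sax.saxutils escapes &, < and > by default, while the two quote characters are handled through its optional entities argument; the variable names raw and extra below are chosen only for the example, and this is a minimal sketch rather than a full XML serialiser.

```python
# A small sketch of escaping the five XML-sensitive characters with the
# standard library; xml.sax.saxutils handles &, < and > by default and the
# two quote characters via the optional `entities` mapping.
from xml.sax.saxutils import escape, unescape

raw = """<config name="demo" note='5 > 3 & 2 < 4'/>"""

extra = {'"': "&quot;", "'": "&apos;"}
escaped = escape(raw, extra)
print(escaped)
# &lt;config name=&quot;demo&quot; note=&apos;5 &gt; 3 &amp; 2 &lt; 4&apos;/&gt;

# Round-trips back to the original text.
assert unescape(escaped, {v: k for k, v in extra.items()}) == raw
```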
tubes are very similar to Oberlin, Endo and Koyama's long straight and parallel carbon layers cylindrically arranged around a hollow tube. The term multi-wall carbon nanotubes is also sometimes used to refer to double- and triple-wall carbon nanotubes. The name carbon nanotubes can also refer to tubes with an undetermined carbon-wall structure and diameters less than 100 nanometers. Such tubes were discovered in 1952 by Radushkevich and Lukyanovich. While nanotubes of other compositions exist, most research has been focused on the carbon ones. Therefore, the "carbon" qualifier is often left implicit in the acronyms, and the names are abbreviated NT, SWNT, and MWNT. The length of a carbon nanotube produced by common production methods is often not reported, but is typically much larger than its diameter. Thus, for many purposes, end effects are neglected and the length of carbon nanotubes is assumed infinite. Some carbon nanotubes exhibit remarkable electrical conductivity, while others are semiconductors. They also have exceptional tensile strength and thermal conductivity because of their nanostructure and the strength of the bonds between carbon atoms. In addition, they can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science.

Rolling up a hexagonal lattice along different directions to form different infinitely long single-wall carbon nanotubes shows that all of these tubes not only have helical but also translational symmetry along the tube axis, and many also have nontrivial rotational symmetry about this axis. In addition, most are chiral, meaning the tube and its mirror image cannot be superimposed. This construction also allows single-wall carbon nanotubes to be labeled by a pair of integers. A special group of achiral single-wall carbon nanotubes is metallic, but all the rest are either small or moderate band gap semiconductors. These electrical properties, however, do not depend on whether the hexagonal lattice is rolled from its back to front or from its front to back, and hence are the same for the tube and its mirror image.

Structure of SWNTs
Basic details
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it. In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other. The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have.
To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet, more precisely with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained by the lengths and directions of the lines that connect pairs of graphene atoms in the same class. Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v the vector from C1 to C5. Then, for any other atom A2 of the same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w = n u + m v on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.

Types
Moreover, the structure of the nanotube is not changed if the strip is rotated by 60 degrees around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m, n+m). It follows that many possible positions of A2 relative to A1 (that is, many pairs (n,m)) correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly.
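The 60-degree rotation described above suggests a simple way to reduce any equivalent (n, m) pair to the canonical type with n > 0 and m ≥ 0. The following Python helper is an illustrative sketch of that reduction; the function name and structure are my own, not taken from the text or from any nanotube library.

```python
def canonical_type(n: int, m: int) -> tuple[int, int]:
    """Reduce an (n, m) roll-up pair to the canonical nanotube type.

    Repeatedly applies the 60-degree lattice rotation
    (n, m) -> (-m, n + m), which cycles through the six equivalent
    pairs, and returns the one with n > 0 and m >= 0.
    """
    if (n, m) == (0, 0):
        raise ValueError("(0, 0) does not describe a tube")
    for _ in range(6):          # the rotation has order 6
        if n > 0 and m >= 0:
            return n, m
        n, m = -m, n + m
    raise AssertionError("unreachable for nonzero integer pairs")

# The six pairs listed in the text all reduce to the same type:
for pair in [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]:
    assert canonical_type(*pair) == (1, 2)
```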
Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube) and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.

Chirality and mirror symmetry
A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (both inclusive), is called the "chiral angle" of the nanotube.

Circumference and diameter
From n and m one can also compute the circumference c, which is the length of the vector w; it turns out to be c = a√(n² + nm + m²) ≈ 246√(n² + nm + m²), in picometres, where a ≈ 246 pm is the length of u and v. The diameter of the tube is then d = c/π ≈ 78.3√(n² + nm + m²), also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained, and they do not take into account the thickness of the wall.)

The tilt angle α between u and w and the circumference c are related to the type indices n and m by
α = arg(2n + m, m√3) and c = a√(n² + nm + m²),
where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas
m = (2c/(a√3)) sin α and n = (c/a)(cos α − (sin α)/√3),
which must evaluate to integers.

Physical limits
Narrowest examples
If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot reasonably be called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, carbyne, which has some characteristics of nanotubes (such as orbital hybridization and high tensile strength) but has no hollow space and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable. The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. The carbon nanotube type was assigned by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations. The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either a (5,1) or a (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.

Length
The longest carbon nanotubes grown so far, around half a meter (550 mm) long, were reported in 2013.
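As a worked illustration of these relations, the sketch below computes circumference, diameter and tilt angle from a type (n, m), taking a ≈ 246 pm for the lattice constant as stated above; the function and its name are illustrative assumptions, not a standard API.

```python
import math

A_PM = 246.0  # graphene lattice constant |u| = |v|, in picometres

def nanotube_geometry(n: int, m: int) -> dict:
    """Circumference, diameter and tilt (chiral) angle of an (n, m) tube."""
    root = math.sqrt(n * n + n * m + m * m)
    c = A_PM * root                  # c = a * sqrt(n^2 + nm + m^2)
    d = c / math.pi                  # d = c / pi
    alpha = math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))
    return {"circumference_pm": c, "diameter_pm": d, "alpha_deg": alpha}

# Armchair (k, k) tubes sit at alpha = 30 degrees, zigzag (k, 0) at 0 degrees.
print(nanotube_geometry(10, 10))   # diameter ~1356 pm (~1.36 nm), alpha = 30.0
print(nanotube_geometry(10, 0))    # diameter ~783 pm  (~0.78 nm), alpha = 0.0
```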
These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes. The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small-molecule carbon nanotubes have been synthesized since.

Density
The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with the co-catalysts cobalt and molybdenum at lower-than-typical temperatures of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm−3. The material showed ohmic conductivity (lowest resistance ~22 kΩ).

Variants
There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization uses single-wall or multi-wall in its documents.

Multi-walled
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.

Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attack by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure of the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen. The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that acts on this telescopic motion is caused by the Lennard-Jones interaction between the shells.

Synthesis
Several techniques have been developed to produce nanotubes in sizable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge, laser ablation and CVD are batch-by-batch processes, while HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases.
The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, but achieving repeatability remains a major problem with CVD growth. Advances in catalysis and continuous growth are making CNTs more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts act as nucleation sites for the nanotubes to grow.

Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties. When the substrate is heated to the growth temperature (~700 °C), the continuous iron film breaks up into small islands; each island then nucleates a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and, in turn, the diameter of the nanotubes grown. The amount of time that the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNT/mm2) while increasing the catalyst diameter.

The as-prepared carbon nanotubes always contain impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used as catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.

Functionalization
CNTs are known to have weak dispersibility in many solvents such as water, as a consequence of strong intermolecular π–π interactions. This hinders the processability of CNTs in industrial applications. In order to tackle the issue, various techniques have been developed over the years to modify the surface of CNTs in order to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tuneable for a wide range of applications. Chemical routes such as covalent functionalization have been studied extensively; this involves the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to introduce carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents. Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments, which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs.
the solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents, even at a low degree of functionalization. Recently, an innovative, bio-based, environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water, producing a highly stable multi-walled carbon nanotubes aqueous suspension (nanofluids). Modeling Carbon nanotubes are modelled in a similar manner as traditional composites in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The size of the micromechanics model is highly function of the studied mechanical properties. The concept of representative volume element (RVE) is used to determine the appropriate size and configuration of computer model to replicate the actual behavior of CNT reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While the implementation of ideal model is computationally efficient, they do not represent microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single wall carbon nanotubes. Metrology There are many metrology standards and reference materials available for carbon nanotubes. For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis. NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube. For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes. 
Chemical modification Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment. The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes. In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so called Fluocar materials) with grafted (halo)fluoroalkyl functionality. Applications A primary obstacle for applications of carbon nanotubes has been their cost. Prices for single-walled nanotubes declined from around $1500 per gram as of 2000 to retail prices of around $50 per gram of as-produced 40–60% by weight SWNTs as of March 2010. As of 2016, the retail price of as-produced 75% by weight SWNTs was $2 per gram. Current Current use and application of nanotubes has mostly been limited to the use of bulk nanotubes, which is a mass of rather unorganized fragments of nanotubes. Bulk nanotube materials may never achieve a tensile strength similar to that of individual tubes, but such composites may, nevertheless, yield strengths sufficient for many applications. Bulk carbon nanotubes have already been used as composite fibers in polymers to improve the mechanical, thermal and electrical properties of the bulk product. Easton-Bell Sports, Inc. have been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components – including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars. Amroy Europe Oy manufactures Hybtonite carbon nanoepoxy resins where carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards. Surrey NanoSystems synthesises carbon nanotubes to create vantablack. Other current applications include: "Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures. tips for atomic force microscope probes in tissue engineering, carbon nanotubes can act as scaffolding for bone growth Under development Current research for modern applications include: Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors. Using carbon nanotubes as a scaffold for diverse microfabrication techniques. Energy dissipation in self-organized nanostructures under influence of an electric field. 
Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases. Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012. The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures. This technology will greatly reduce the risk of an in-flight failure caused by structural degradation of aircraft. Zyvex Technologies has also built a 54' maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles. Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or damascus steel. IBM expected carbon nanotube transistors to be used on Integrated Circuits by 2020. Potential The strength and flexibility of carbon nanotubes makes them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical initiated thermal crosslinking method to fabricated macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano- structured pores and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants. CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Eliminating electromigration reliability concerns that plague today's Cu interconnects, isolated (single and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm2 without electromigration damage. Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters of an order of a nanometer can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FET). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a not logic gate with both p- and n-type FETs in the same molecule. 
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, which is a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than that of metallic conductors at 300 K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed. CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.

Safety and health
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may instead be due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons.
church in Bohemia started already in the late 14th century. Jan Hus's followers seceded from some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by Sigismund. During the next two centuries, 90% of the population in Bohemia and Moravia were considered Hussites. The pacifist thinker Petr Chelčický inspired the movement of the Moravian Brethren (by the middle of the 15th century) that completely separated from the Roman Catholic Church. On 21 December 1421, Jan Žižka, a successful military commander and mercenary, led his group of forces in the Battle of Kutná Hora, resulting in a victory for the Hussites. He is honoured to this day as a national hero. After 1526 Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then in 1627 the hereditary rulers of Bohemia. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court. The Defenestration of Prague and subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. The nobility and the middle class Protestants had to either convert to Catholicism or leave the country. In the "Dark Age" of 1620 to the late 18th century, the population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as due to the war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism. The flowering of Baroque culture shows the ambiguity of this historical period. Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the Great Plague of Vienna and an uprising of serfs. There were peasant uprisings influenced by famine. Serfdom was abolished between 1781 and 1848. Several battles of the Napoleonic Wars took place on the current territory of the Czech Republic. The end of the Holy Roman Empire in 1806 led to degradation of the political status of Bohemia which lost its position of an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. Bohemian lands became part of the Austrian Empire. During the 18th and 19th century the Czech National Revival began its rise, with the purpose to revive Czech language, culture, and national identity. The Revolution of 1848 in Prague, striving for liberal reforms and autonomy of the Bohemian Crown within the Austrian Empire, was suppressed. It seemed that some concessions would be made also to Bohemia, but in the end, the Emperor Franz Joseph I affected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never realized coronation of Franz Joseph as King of Bohemia led to a disappointment of some Czech politicians. The Bohemian Crown lands became part of the so-called Cisleithania. The Czech Social Democratic and progressive politicians started the fight for universal suffrage. The first elections under universal male suffrage were held in 1907. Czechoslovakia In 1918, during the collapse of the Habsburg Monarchy at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created, with Tomáš Garrigue Masaryk in the lead. This new country incorporated the Bohemian Crown. 
The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of the industry, which enabled it to compete with Western industrial states. In 1929 compared to 1913, the gross domestic product increased by 52% and industrial production by 41%. In 1938 Czechoslovakia held 10th place in the world industrial production. Czechoslovakia was the only country in Central and Eastern Europe to remain a democracy throughout the entire interwar period. Although the First Czechoslovak Republic was a unitary state, it provided certain rights to its minorities, the largest being Germans (23.6% in 1921), Hungarians (5.6%) and Ukrainians (3.5%). Western Czechoslovakia was occupied by Nazi Germany, which placed most of the region into the Protectorate of Bohemia and Moravia. The Protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's Reichsprotektor. One Nazi concentration camp was located within the Czech territory at Terezín, north of Prague. The vast majority of the Protectorate's Jews were murdered in Nazi-run concentration camps. The Nazi Generalplan Ost called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people. There was Czechoslovak resistance to Nazi occupation as well as reprisals against the Czechoslovaks for their anti-Nazi resistance. The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. Most of Czechoslovakia's German-speakers were forcibly expelled from the country, first as a result of local acts of violence and then under the aegis of an "organized transfer" sanctified by the Soviet Union, the United States, and Great Britain at the Potsdam Conference. In the 1946 elections, the Communist Party gained 38% of the votes and became the largest party in the Czechoslovak parliament, formed a coalition with other parties, and consolidated power. A coup d'état came in 1948 and a single-party government was formed. For the next 41 years, the Czechoslovak Communist state is characterized by certain Eastern Bloc's economic and political features. The Prague Spring political liberalization was stopped by the 1968 Warsaw Pact invasion of Czechoslovakia. Analysts believe that the invasion caused the communist movement to fracture, ultimately leading to the Revolutions of 1989. Czech Republic In November 1989, Czechoslovakia returned to a liberal democracy through the Velvet Revolution. However, Slovak national aspirations strengthened (Hyphen War) and on 1 January 1993, the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatizations, with the intention of creating a market economy. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development". From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and from 1995, the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area. 
Until 2017, either the Czech Social Democratic Party or the Civic Democratic Party led the governments of the Czech Republic. In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the centre-right Civic Democrats. In December 2017, Czech President Miloš Zeman appointed Andrej Babiš as the new prime minister. After the elections in October 2021, Petr Fiala became the new Prime Minister. He formed a government coalition of the SPOLU alliance (Civic Democratic Party, KDU-ČSL and TOP 09) and the alliance of Pirates and Mayors. In the election, the SPOLU alliance narrowly defeated the ANO movement.

Government
The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (200 members) and the Senate (81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff. The president is a formal head of state with limited and specific powers, who appoints the prime minister, as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (2x Václav Havel, 2x Václav Klaus). Since 2013 the president has been elected directly. Some commentators have argued that, with the introduction of direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one. The Government's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, Deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and choose government ministers.

Principal office holders:
President: Miloš Zeman (SPOZ), since 8 March 2013
President of the Senate: Miloš Vystrčil (ODS), since 19 February 2020
President of the Chamber of Deputies: Markéta Pekarová Adamová (TOP 09), since 10 November 2021
Prime Minister: Petr Fiala (ODS), since 28 November 2021

Law
The Czech Republic is a unitary state, with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code has been effective since 2010, and a new Civil Code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts.
The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office. Foreign relations The Czech Republic has ranked as one of the safest or most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country. The Czech passport is restricted by visas. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals. The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad. The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrad Group, as well as with Germany, Israel, the United States and the European Union and its members. Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba. Famous Czech diplomats of the past included Count Philip Kinsky of Wchinitz and Tettau, Karl Philipp, Prince of Schwarzenberg, Edvard Beneš, Jan Masaryk, Jiří Dienstbier and Prince Karel Schwarzenberg. Military The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of the GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO. Currently, as a member of NATO, the Czech military are participating in the Resolute Support and KFOR operations and have soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force also served in the Baltic states and Iceland. 
The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ). The most famous Czech, and therefore Czechoslovak, soldiers and military leaders of the past were Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel. Administrative divisions Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, the assembly and presidential powers are executed by the city council and the mayor. The older seventy-six districts (okresy, singular okres) including three "statutory cities" (without Prague, which had special status) lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration. The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and municipality at the same time. Economy The Czech Republic has a developed, high-income export-oriented social market economy based in services, manufacturing and innovation, that maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP rate that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted human development and 24th in World Bank Human Capital Index. It was described by The Guardian as "one of Europe's most flourishing economies". The COVID-19 pandemic had an expected negative impact on the Czech economy, but economists predict the growth of 3.9% in 2021 and then 4.3% in 2022. , the country's GDP per capita at purchasing power parity is $40,793 and $22,942 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced a 4.5% GDP growth in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest of OECD members. Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 24th in the 2016 Global Innovation Index, 29th in the Global Competitiveness Report, 41st in the ease of doing business index and 25th in the Global Enabling Trade Report. The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both export and import is Germany and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. 
The country has been a member of the Schengen Area since 1 May 2004, having abolished border controls, completely opening its borders with all of its neighbors on 21 December 2007. Industry In 2018 the largest companies by revenue in the Czech Republic were: automobile manufacturer Škoda Auto, utility company ČEZ Group, conglomerate Agrofert, energy trading company EPH, oil processing company Unipetrol, electronics manufacturer Foxconn CZ and steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters). Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl records manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and five who produce automatic weapons. In the food industry succeeded companies Agrofert, Kofola and Hamé. Energy Production of Czech electricity exceeds consumption by about 10 TWh per year, the excess being exported. Nuclear power presently provides about 30 percent of the total power needs, its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal); 30 percent by nuclear plants; and 4.6 percent came from renewable sources, including hydropower. The largest Czech power resource is Temelín Nuclear Power Station, with another nuclear power plant in Dukovany. The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is procured from Russian Gazprom, roughly three quarters of domestic consumption, and from Norwegian companies, which make up most of the remaining quarter. Russian gas is imported via Ukraine, Norwegian gas is transported through Germany. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits. Transportation infrastructure As of 2020, the road network in the Czech Republic is long, out of which are motorways. The speed limit is 50 km/h within towns, 90 km/h outside of towns and 130 km/h on motorways. The Czech Republic has one of the densest rail networks in the world. As of 2020, the country has of lines. Of that number, is electrified, are single-line tracks and are double and multiple-line tracks. The length of tracks is , out of which is electrified. České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed is limited to 160 km/h. Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody. 
Russia, via pipelines through Ukraine and to a lesser extent, Norway, via pipelines through Germany, supply the Czech Republic with liquid and natural gas. Communications and IT The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This and later privatization of Český Telecom helped drive down prices. On 1 July 2006, Český Telecom was acquired by globalized company (Spain-owned) Telefónica group and adopted the new name Telefónica O2 Czech Republic. , VDSL and ADSL2+ are offered in variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining more popularity with its higher download speeds ranging from 50 Mbit/s to 1 Gbit/s. Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast led by Pavel Baudiš bought rival AVG for US$1.3 billion, together at the time, these companies had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share. Tourism Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population. Guidebooks and tourists reporting overcharging by taxi drivers and pickpocketing problems are mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a "safe" city. The Czech Republic's crime rate is described by the United States State department as "low". One of the tourist attractions in the Czech Republic is the Nether district Vítkovice in Ostrava. The Czech Republic boasts 16 UNESCO World Heritage Sites, 3 of them are transnational. , further 14 sites are on the tentative list. Architectural heritage is an object of interest to visitors – it includes castles and châteaux from different historical epoques, namely Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, calm monasteries. Away from the towns, areas such as Bohemian Paradise, Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. There is a number of beer festivals. The country is also known for its various museums. Puppetry and marionette exhibitions are with a number of puppet festivals throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country. Science The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a highly sophisticated, developed, high-performing, innovation-oriented scientific community supported by the government, industry, and leading Czech Universities. 
Czech scientists are embedded members of the global scientific community. They contribute annually to multiple international academic journals and collaborate with their colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020, up from 26th in 2019. Historically, the Czech lands, especially Prague, have been the seat of scientific discovery going back to early modern times, including Tycho Brahe, Nicolaus Copernicus, and Johannes Kepler. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences. Currently, this organization is known as the Czech Academy of Sciences. Similarly, the Czech lands have a well-established history of scientists, including Nobel laureates biochemists Gerty and Carl Ferdinand Cori, chemist Jaroslav Heyrovský, chemist Otto Wichterle, physicist Peter Grünberg and chemist Antonín Holý. Sigmund Freud, the founder of psychoanalysis, was born in Příbor, Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno. Most of the scientific research was recorded in Latin or in German and archived in libraries supported and managed by religious groups and other denominations as evidenced by historical locations of international renown and heritage such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish their work and that of their history in English. The current important scientific institution is the already mentioned Academy of Sciences of the Czech Republic, the CEITEC Institute in Brno or the HiLASE and Eli Beamlines centers with the most powerful laser in the world in Dolní Břežany. Prague is the seat of the administrative center of the GSA Agency operating the European navigation system Galileo and the European Union Agency for the Space Programme. Demographics The total fertility rate (TFR) in 2020 was estimated at 1.71 children born/woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. The life expectancy in 2021 was estimated to be 79.5 years (76.55 years male, 82.61 years female). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently. According to results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Viets (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared combination of two nationalities (3.6% combination of Czech and other nationality). As the 'nationality' was an optional item, a number of people left this field blank (31.6%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Zaolzie region. There were 496,413 (4.5% of population) foreigners residing in the country in 2016, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%). 
the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development". From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and from 1995, the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area. Until 2017, either the Czech Social Democratic Party or the Civic Democratic Party led the governments of the Czech Republic. In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the centre-right Civic Democrats. In December 2017, Czech President Miloš Zeman appointed Andrej Babiš as the new prime minister. After the elections in October 2021, Petr Fiala became the new Prime Minister. He formed a government coalition of the Alliance SPOLU (Civic Democratic Party, KDU-ČSL and TOP 09) and the Alliance of Pirates and Mayors. In the election, the SPOLU alliance narrowly defeated the ANO movement. Government The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (200 members) and the Senate (81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff. The president is a formal head of state with limited and specific powers, who appoints the prime minister, as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (2x Václav Havel, 2x Václav Klaus). Since 2013, the president has been elected directly. Some commentators have argued that, with the introduction of direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one. 
The Government's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, Deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and choose government ministers.
President: Miloš Zeman (SPOZ), since 8 March 2013
President of the Senate: Miloš Vystrčil (ODS), since 19 February 2020
President of the Chamber of Deputies: Markéta Pekarová Adamová (TOP 09), since 10 November 2021
Prime Minister: Petr Fiala (ODS), since 28 November 2021
Law The Czech Republic is a unitary state, with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code is effective from 2010. A new Civil code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office. Foreign relations The Czech Republic has ranked as one of the safest or most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country. The Czech passport is restricted by visas. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals. The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad. The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrad Group, as well as with Germany, Israel, the United States and the European Union and its members. Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba. 
Famous Czech diplomats of the past included Count Philip Kinsky of Wchinitz and Tettau, Karl Philipp, Prince of Schwarzenberg, Edvard Beneš, Jan Masaryk, Jiří Dienstbier and Prince Karel Schwarzenberg. Military The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of the GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO. Currently, as a member of NATO, the Czech military are participating in the Resolute Support and KFOR operations and have soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ). The most famous Czech, and therefore Czechoslovak, soldiers and military leaders of the past were Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel. Administrative divisions Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, the assembly and presidential powers are executed by the city council and the mayor. The older seventy-six districts (okresy, singular okres) including three "statutory cities" (without Prague, which had special status) lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration. The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and municipality at the same time. Economy The Czech Republic has a developed, high-income export-oriented social market economy based in services, manufacturing and innovation, that maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP rate that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted human development and 24th in World Bank Human Capital Index. It was described by The Guardian as "one of Europe's most flourishing economies". The COVID-19 pandemic had an expected negative impact on the Czech economy, but economists predict the growth of 3.9% in 2021 and then 4.3% in 2022. 
The country's GDP per capita at purchasing power parity is $40,793 and $22,942 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced GDP growth of 4.5% in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest of OECD members. The Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 24th in the 2016 Global Innovation Index, 29th in the Global Competitiveness Report, 41st in the ease of doing business index and 25th in the Global Enabling Trade Report. The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both export and import is Germany and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. The country has been a member of the Schengen Area since 1 May 2004, having abolished border controls, completely opening its borders with all of its neighbors on 21 December 2007. Industry In 2018 the largest companies by revenue in the Czech Republic were: automobile manufacturer Škoda Auto, utility company ČEZ Group, conglomerate Agrofert, energy trading company EPH, oil processing company Unipetrol, electronics manufacturer Foxconn CZ and steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters). Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl records manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and one of the five that produce automatic weapons. Successful companies in the food industry include Agrofert, Kofola and Hamé. Energy Production of Czech electricity exceeds consumption by about 10 TWh per year, the excess being exported. Nuclear power presently provides about 30 percent of the total power needs; its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal); 30 percent by nuclear plants; and 4.6 percent came from renewable sources, including hydropower. The largest Czech power resource is Temelín Nuclear Power Station, with another nuclear power plant in Dukovany. The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is procured from Russian Gazprom, roughly three quarters of domestic consumption, and from Norwegian companies, which make up most of the remaining quarter. Russian gas is imported via Ukraine, while Norwegian gas is transported through Germany. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits. Transportation infrastructure As of 2020, the road network in the Czech Republic is long, out of which are motorways. 
The speed limit is 50 km/h within towns, 90 km/h outside of towns and 130 km/h on motorways. The Czech Republic has one of the densest rail networks in the world. As of 2020, the country has of lines. Of that number, is electrified, are single-line tracks and are double and multiple-line tracks. The length of tracks is , out of which is electrified. České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed is limited to 160 km/h. Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody. Russia (via pipelines through Ukraine) and, to a lesser extent, Norway (via pipelines through Germany) supply the Czech Republic with liquid and natural gas. Communications and IT The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This, and the later privatization of Český Telecom, helped drive down prices. On 1 July 2006, Český Telecom was acquired by the Spanish Telefónica group and adopted the new name Telefónica O2 Czech Republic. VDSL and ADSL2+ are offered in variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining more popularity with its higher download speeds ranging from 50 Mbit/s to 1 Gbit/s. Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast, led by Pavel Baudiš, bought rival AVG for US$1.3 billion; together, at the time, these companies had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share. Tourism Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population. Guidebooks and tourists report problems with overcharging by taxi drivers and pickpocketing, mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a "safe" city. The Czech Republic's crime rate is described by the United States State Department as "low". One of the tourist attractions in the Czech Republic is the Lower Vítkovice district in Ostrava. The Czech Republic boasts 16 UNESCO World Heritage Sites, 3 of which are transnational. A further 14 sites are on the tentative list. 
Architectural heritage is an object of interest to visitors – it includes castles and châteaux from different historical epochs, such as Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, as well as tranquil monasteries. Away from the towns, areas such as Bohemian Paradise, Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. There are a number of beer festivals. The country is also known for its various museums. Puppetry and marionette exhibitions are popular, with a number of puppet festivals held throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country. Science The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a sophisticated, high-performing, innovation-oriented scientific community supported by the government, industry, and leading Czech universities. Czech scientists are embedded members of the global scientific community. They contribute annually to multiple international academic journals and collaborate with their colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020, up from 26th in 2019. Historically, the Czech lands, especially Prague, have been the seat of scientific discovery going back to early modern times, including Tycho Brahe, Nicolaus Copernicus, and Johannes Kepler. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences. Currently, this organization is known as the Czech Academy of Sciences. Similarly, the Czech lands have a well-established history of scientists, including the Nobel laureate biochemists Gerty and Carl Ferdinand Cori, chemists Jaroslav Heyrovský, Otto Wichterle and Antonín Holý, and physicist Peter Grünberg. Sigmund Freud, the founder of psychoanalysis, was born in Příbor; Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno. Most of the scientific research was recorded in Latin or in German and archived in libraries supported and managed by religious groups and other denominations, as evidenced by historical locations of international renown and heritage such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish both their work and accounts of its history in English. Important current scientific institutions include the already mentioned Academy of Sciences of the Czech Republic, the CEITEC Institute in Brno, and the HiLASE and ELI Beamlines centers in Dolní Břežany, home to the most powerful laser in the world. Prague is the seat of the administrative center of the GSA Agency, which operates the European navigation system Galileo, and of its successor, the European Union Agency for the Space Programme. Demographics The total fertility rate (TFR) in 2020 was estimated at 1.71 children born/woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. The life expectancy in 2021 was estimated to be 79.5 years (76.55 years male, 82.61 years female). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently. 
According to results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Viets (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared combination of two nationalities (3.6% combination of Czech and other nationality). As the 'nationality' was an optional item, a number of people left this field blank (31.6%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Zaolzie region. There were 496,413 (4.5% of population) foreigners residing in the country in 2016, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%). The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was nearly annihilated by the Nazi Germans during the Holocaust. There were approximately 3,900 Jews in the Czech Republic in 2021. The former Czech prime minister, Jan Fischer, is of Jewish faith. Nationality of residents, who answered the question in the Census 2021: Largest cities Religion About 75% to 79% of residents of the Czech Republic do not declare having any religion or faith in surveys, and the proportion of convinced atheists (30%) is the third highest in the world behind those of China (47%) and Japan (31%). The Czech people have been historically characterized as "tolerant and even indifferent towards religion". Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant Reformers. Taborites and Utraquists were Hussite groups. Towards the end of the Hussite Wars, the Utraquists changed sides and allied with the Catholic Church. Following the joint Utraquist—Catholic victory, Utraquism was accepted as a distinct form of Christianity to be practiced in Bohemia by the Catholic Church while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians went with the teachings of Martin Luther, especially Sudeten Germans. In the wake of the Reformation, Utraquist Hussites took a renewed increasingly anti-Catholic stance, while some of the defeated Hussite factions were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism—even the Utraquist Hussites. Going forward, Czechs have become more wary and pessimistic of religion as such. A history of resistance to the Catholic Church followed. It suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era and continues to lose in the modern, ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620. According to the 2011 census, 34% of the population stated they had no religion, 10.3% was Catholic, 0.8% was Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion both denominational or not (of which 863 people answered they are Pagan). 45% of the population did not answer the question about religion. 
From 1991 to 2001 and further to 2011 the adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated to be 20,000, representing 0.2% of the population. The proportion of religious believers varies significantly across the country, from 55% in Zlín Region to 16% in Ústí nad Labem Region. Education and health care Education in the Czech Republic is compulsory for nine years and citizens have access to a tuition-free university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a "relatively equal" educational system in comparison with other countries in Europe. Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, Czech Technical University, Palacký University, Academy of Performing Arts and University of Economics. The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th (positioned behind Denmark and ahead of South Korea). Health care in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro health consumer index, a comparison of healthcare in Europe, the Czech healthcare system is 13th, ranked behind Sweden and two positions ahead of the United Kingdom. Culture Art The Venus of Dolní Věstonice is a treasure of prehistoric art. Theodoric of Prague was a painter in the Gothic era who decorated Karlštejn Castle. In the Baroque era, there were Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs and Petr Brandl, as well as the sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the romantic movement. In the second half of the 19th century, the so-called "National Theatre generation" had the main say: sculptor Josef Václav Myslbek and painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais and Julius Mařák. At the end of the century came a wave of Art Nouveau. Alfons Mucha became the main representative. He is known for Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of Czechs and other Slavs. The Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another Art Nouveau painter. The 20th century brought an avant-garde revolution. In the Czech lands it was mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta and Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. Internationally, however, it was mainly František Kupka, a pioneer of abstract painting, who made his mark. 
Just like Jews, Poles, Serbs, and several other nations, Czechs were considered to be untermenschen by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized. The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement. For the Czechs of the Protectorate Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish populations of Bohemia and Moravia (118,000 according to the 1930 census) were virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation. Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population in the Reichsprotektorate saw a net increase during the war years of approximately 250,000 in line with an increased birth rate. On 6 May 1945, the third US Army of General Patton entered Pilsen from the south west. On 9 May 1945, Soviet Red Army troops entered Prague. Communist Czechoslovakia After World War II, pre-war Czechoslovakia was re-established, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary. Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. 
In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they would maintain the fiction of political pluralism through the existence of the National Front, except for a short period in the late 1960s (the Prague Spring) the country had no liberal democracy. Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe. The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value. The banks also confiscated savings and bank deposits to control the amount of money in circulation. In the 1950s, Czechoslovakia experienced high economic growth (averaging 7% per year), which allowed for a substantial increase in wages and living standards, thus promoting the stability of the regime. In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet, socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism. In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance: this was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies. Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization. The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison. After 1989 In 1989, the Velvet Revolution restored democracy. 
This occurred at around the same time as the fall of communism in Romania, Bulgaria, Hungary and Poland. The word "socialist" was removed from the country's full name on 29 March 1990 and replaced by "federal". In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 1 January 1993 it formally separated into two independent countries, the Czech Republic and the Slovak Republic. Government and politics After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). The leader of the KSČ was de facto the most powerful person in the country during this period. Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under umbrella of the National Front. Human rights activists and religious activists were severely repressed. Constitutional development Czechoslovakia had the following constitutions during its history (1918–1992): Temporary constitution of 14 November 1918 (democratic): see History of Czechoslovakia (1918–1938) The 1920 constitution (The Constitutional Document of the Czechoslovak Republic), democratic, in force until 1948, several amendments The Communist 1948 Ninth-of-May Constitution The Communist 1960 Constitution of the Czechoslovak Socialist Republic with major amendments in 1968 (Constitutional Law of Federation), 1971, 1975, 1978, and 1989 (at which point the leading role of the Communist Party was abolished). It was amended several more times during 1990–1992 (for example, 1990, name change to Czecho-Slovakia, 1991 incorporation of the human rights charter) Heads of state and government List of presidents of Czechoslovakia List of prime ministers of Czechoslovakia Foreign policy International agreements and membership In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, an active participant in Council for Mutual Economic Assistance (Comecon), Warsaw Pact, United Nations and its specialized agencies; signatory of conference on Security and Cooperation in Europe. Administrative divisions 1918–1923: Different systems in former Austrian territory (Bohemia, Moravia, a small part of Silesia) compared to former Hungarian territory (Slovakia and Ruthenia): three lands (země) (also called district units (kraje)): Bohemia, Moravia, Silesia, plus 21 counties (župy) in today's Slovakia and three counties in today's Ruthenia; both lands and counties were divided into districts (okresy). 1923–1927: As above, except that the Slovak and Ruthenian counties were replaced by six (grand) counties ((veľ)župy) in Slovakia and one (grand) county in Ruthenia, and the numbers and | The country became a Marxist-Leninist state under Soviet domination with a command economy. In 1960, the country officially became a socialist republic, the Czechoslovak Socialist Republic. It was a satellite state of the Soviet Union. 1989–1990: Czechoslovakia formally became a federal republic comprising the Czech Socialist Republic and the Slovak Socialist Republic. In late 1989, the communist rule came to an end during the Velvet Revolution followed by the re-establishment of a democratic parliamentary republic. 
1990–1992: Shortly after the Velvet Revolution, the state was renamed the Czech and Slovak Federative Republic, consisting of the Czech Republic and the Slovak Republic (Slovakia) until the peaceful dissolution on 1 January 1993. Neighbors Austria 1918–1938, 1945–1992 Germany (both predecessors, West Germany and East Germany, were neighbors between 1949 and 1990) Hungary Poland Romania 1918–1938 Soviet Union 1945–1991 Ukraine 1991–1992 (Soviet Union member until 1991) Topography The country was of generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and lands of the Danube River basin. Climate The weather is mild winters and mild summers. Influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and Mediterranean Sea from the south. There is no continental weather. Names 1918–1938: Czechoslovak Republic (abbreviated ČSR), or Czechoslovakia, before the formalization of the name in 1920, also known as Czecho-Slovakia or the Czecho-Slovak state 1938–1939: Czecho-Slovak Republic, or Czecho-Slovakia 1945–1960: Czechoslovak Republic (ČSR), or Czechoslovakia 1960–1990: Czechoslovak Socialist Republic (ČSSR), or Czechoslovakia 1990–1992: Czech and Slovak Federative Republic (ČSFR), or Czechoslovakia History Origins The area was long a part of the Austro-Hungarian Empire until the empire collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk (1850–1937), who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally, Edvard Beneš (1884–1948). The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded various patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life prior to independence. Palacký supported Austro-Slavism and worked for a reorganized and federal Austrian Empire, which would protect the Slavic speaking peoples of Central Europe against Russian and German threats. An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the Reichsrat (Austrian Parliament), first from 1891 to 1893 for the Young Czech Party, and again from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl. During World War I a number of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists. The Czechoslovak National Council was the main organization that advanced the claims for a Czechoslovak state. First Czechoslovak Republic Formation The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. 
Czechoslovakia was founded in October 1918, as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia. Its territory included some of the most industrialized regions of the former Austria-Hungary. Ethnicity The new country was a multi-ethnic state, with Czechs and Slovaks as constituent peoples. The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle. The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other Central European nations deported ethnic Germans, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis. *Jews identified themselves as Germans or Hungarians (and Jews only by religion not ethnicity), the sum is, therefore, more than 100%. Interwar period During the period between the two world wars Czechoslovakia was a democratic state. The population was generally literate, and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomas Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent. Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the "Little Entente", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Many Slovaks supported an extreme nationalist movement and welcomed the puppet Slovak state set up under Hitler's control in 1939. After 1933, Czechoslovakia remained the only democracy in central and eastern Europe. Munich Agreement, and Two-Step German Occupation In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded control in the Appeasement at the Munich Conference; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied the Sudetenland border region, effectively crippling Czechoslovak defences. The First Vienna Award assigned a strip of southern Slovakia and Carpathian Ruthenia to Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938. 
On 14 March 1939, the remainder ("rump") of Czechoslovakia was dismembered by the proclamation of the Slovak State, the next day the rest of Carpathian Ruthenia was occupied and annexed by Hungary, while the following day the German Protectorate of Bohemia and Moravia was proclaimed. The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. 
some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems. Fields As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. Theoretical computer science Theoretical Computer Science is mathematical and abstract in spirit, but it derives its motivation from the practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. Theory of computation According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation. Information and coding theory Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods. Data structures and algorithms Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency. Programming language theory and formal methods Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. 
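As a small, hedged illustration of the ideas programming language theory deals with, the following sketch defines the abstract syntax of a tiny expression language and gives it a meaning by structural recursion; the names Num, Add, Mul and evaluate are illustrative only and do not come from any particular library.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Mul:
    left: "Expr"
    right: "Expr"

# An expression is a number, a sum or a product: this is the abstract syntax.
Expr = Union[Num, Add, Mul]

def evaluate(expr: Expr) -> float:
    """Give each syntactic construct a meaning, case by case (its semantics)."""
    if isinstance(expr, Num):
        return expr.value
    if isinstance(expr, Add):
        return evaluate(expr.left) + evaluate(expr.right)
    if isinstance(expr, Mul):
        return evaluate(expr.left) * evaluate(expr.right)
    raise TypeError(f"unknown expression: {expr!r}")

if __name__ == "__main__":
    # The tree below stands for (2 + 3) * 4 and evaluates to 20.
    print(evaluate(Mul(Add(Num(2), Num(3)), Num(4))))
```

Real languages add types, variables and control flow, but the same pattern of defining syntax and then semantics case by case scales up to them.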
It is an active research area, with numerous dedicated academic journals. Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification. Computer systems and computational processes Artificial intelligence Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data. Computer architecture and organization Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term “architecture” in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Concurrent, parallel and distributed computing Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. 
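As a minimal sketch of concurrency using only the Python standard library, the following program runs several computations simultaneously and lets them interact safely through a lock; the function name worker is illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    """Increment the shared counter; the lock makes each update atomic."""
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock the result is always 400000; without it, updates from the
# four interacting threads may interleave and some increments may be lost.
print(counter)
```

Removing the lock makes the final count unpredictable, which is exactly the kind of behaviour the mathematical models of concurrency discussed next are designed to reason about.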
A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers in a network make use of concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals. Computer networks This branch of computer science aims to manage networks between computers worldwide. Computer security and cryptography Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits. Databases and data mining A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is the process of discovering patterns in large data sets. Computer graphics and visualization Computer graphics is the study of digital visual content and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games. Image and sound processing Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier, whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. The question "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science. Applied computer science Computational science, finance and engineering Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major use of scientific computing is the simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable the optimization of designs as complex as complete aircraft. Notable in electrical and electronic circuit design is SPICE, as well as software for the physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
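As a small, self-contained illustration of the scientific-computing theme above (a hypothetical sketch, not an example from the source; the component values and step size are made up), the Python fragment below simulates the discharge of a capacitor through a resistor with the explicit Euler method and compares the result with the exact exponential solution.

```python
import math

R = 1_000.0   # resistance in ohms (illustrative value)
C = 1e-6      # capacitance in farads (illustrative value)
V0 = 5.0      # initial voltage in volts
dt = 1e-5     # time step in seconds
steps = 200

v = V0
for n in range(steps):
    # dV/dt = -V / (R * C), advanced by one explicit Euler step
    v += dt * (-v / (R * C))

t = steps * dt
exact = V0 * math.exp(-t / (R * C))
print(f"simulated: {v:.4f} V, exact: {exact:.4f} V")
```

Even this toy example shows the usual trade-off in simulation: a smaller time step tracks the exact solution more closely at the cost of more computation.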
Social computing and human–computer interaction Social computing is an area concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers. Software engineering Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it does not deal only with the creation or manufacture of new software, but also with its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt and software development processes. Discoveries The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science: Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read symbol at current location; print 0 at current location; print 1 at current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything". Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this. Note that the three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means that goto is even more elementary than structured programming). Programming paradigms Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods.
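As a minimal sketch of the object-oriented paradigm just described (the class and its members are invented for illustration and are not taken from the source), the Python snippet below defines an object whose data fields (attributes) are read and modified by its own procedures (methods).

```python
class BankAccount:
    """An object bundling data fields with the procedures that operate on them."""

    def __init__(self, owner: str, balance: float = 0.0) -> None:
        self.owner = owner      # data field (attribute)
        self.balance = balance  # data field (attribute)

    def deposit(self, amount: float) -> None:
        """A method: a procedure that modifies the object's own data."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount("Ada")
account.deposit(100.0)
account.withdraw(30.0)
print(account.owner, account.balance)  # Ada 70.0
```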
A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work to design and implement integrated business applications and mission-critical software programs. Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities. Academia Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the rapid development of this relatively new field requires fast review and distribution of results, a task better handled by conferences than by journals. Education Computer science, also known by near-synonyms such as computing and computer studies, has been taught in UK schools since the days of batch processing, mark-sense cards and paper tape, but usually only to a select few students. In 1981, the BBC produced a microcomputer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds), and Computer Science for A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stage 3 & 4. In September 2014 it became an entitlement for all pupils over the age of 4. In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science. Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following. See also Computer engineering Computer programming Digital Revolution Information and communications technology Information technology List of computer scientists List of computer science awards List of important publications in computer science List of pioneers in computer science List of unsolved problems in computer science Programming language Software engineering Notes References Further reading Overview "Within more than 70 chapters, every one new or significantly revised, one can find any kind of information and references about computer science one can imagine. […] all in all, there is absolute nothing about Computer Science that can not be found in the 2.5 kilogram-encyclopaedia with its 110 survey articles […]." (Christoph Meinel, Zentralblatt MATH) "[…] this set is the most unique and possibly the most useful to the [theoretical computer science] community, in support both of teaching and research […].
The books can be used by anyone wanting simply to gain an understanding of one of these areas, or by someone desiring to be in research in a topic, or by instructors wishing to find timely information on a subject they are teaching outside their major areas of expertise." (Rocky Ross, SIGACT News) "Since 1976, this has been the definitive reference work on computer, computing, and computer science. […] Alphabetically arranged and classified into broad subject areas, the entries cover hardware, computer systems, information and data, software, the mathematics of computing, theory of computation, methodologies, applications, and computing milieu. The editors have done a commendable job of blending historical perspective and practical reference information. The encyclopedia remains essential for most public and academic library reference collections." (Joe Accardin, Northeastern Illinois Univ., Chicago) Selected literature "Covering a period from 1966 to 1993, its interest lies not only in the content of each of these papers – still timely today – but also in their being put together so that ideas expressed at different times complement each other nicely." (N. Bernard, Zentralblatt MATH) Articles Peter J. Denning, Is computer science science?, Communications of the ACM, April 2005. Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004. Research evaluation for computer science, Informatics Europe report. Shorter journal version: Bertrand Meyer, Christine Choppy, Jan van Leeuwen and Jorgen Staunstrup, Research evaluation for computer science, in Communications of the ACM, vol. 52, no. 4, pp. 31–34, April 2009. Curriculum and classification Association for Computing Machinery. 1998 ACM Computing Classification System. 1998. Joint Task Force of Association for Computing Machinery (ACM), Association for Information Systems (AIS) and IEEE Computer Society (IEEE CS). Computing Curricula 2005: The Overview Report. September 30, 2005. Norman Gibbs, Allen Tucker. "A model curriculum for a liberal arts degree in computer science". Communications of the ACM, Volume 29 Issue 3, March 1986. External links Scholarly Societies in Computer Science What is Computer Science? Best Papers Awards in Computer Science since 1996 Photographs of computer scientists by Bertrand Meyer EECS.berkeley.edu Bibliography and academic search engines CiteSeerx (article): search engine, digital library and repository for scientific and academic papers with a focus on computer and information science. DBLP Computer Science Bibliography (article): computer science bibliography website hosted at Universität Trier, in Germany. The Collection of Computer
or related to Catalonia: Catalan language, a Romance language Catalans, an ethnic group formed by the people from, or with origins in, Catalonia Països Catalans, territories where Catalan is spoken Catalan cuisine, a Mediterranean style of cuisine from Catalonia Catalan Republic Catalan State Places 13178 Catalan, asteroid #13178, named "Catalan" Catalán (crater), a lunar crater named for Miguel Ángel Catalán Çatalan, İvrindi, a village in Balıkesir province, Turkey Çatalan, Karaisalı, a village in Adana Province, Turkey Catalan Bay, Gibraltar Catalan Sea, more commonly known as the Balearic Sea Catalan Mediterranean System, the Catalan Mountains Facilities and structures Çatalan Bridge, Adana, Turkey Çatalan Dam, Adana, Turkey Catalan Batteries, Gibraltar People Catalan, Lord of Monaco (1415–1457), Lord of Monaco from 1454 until 1457 Alfredo Catalán (born 1968), Venezuelan politician Alex Catalán (born 1968), Spanish filmmaker Arnaut Catalan (1219–1253), troubadour Diego Catalán (1928–2008), Spanish philologist Emilio Arenales Catalán (1922–1969), Guatemalan politician Eugène Charles Catalan (1814–1894), French and Belgian mathematician Miguel A. Catalán (1894–1957), Spanish spectroscopist Moses Chayyim Catalan (died 1661), Italian rabbi Sergio Catalán (born 1991), Chilean soccer player Mathematics Mathematical concepts named after mathematician Eugène Catalan: Catalan numbers, a sequence of
From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws. In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern the formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne (the Vatican's chief astronomer between 1978 and 2006): "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God." While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural. Religious views There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism. Baháʼí Faith In the creation myth taught by Bahá'u'lláh, the founder of the Baháʼí Faith, the universe has "neither beginning nor ending," and the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, `Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences at the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. `Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but held that the capacity to form human intelligence was always in existence. Buddhism Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe.
In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning. Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa consistently critiqued Creator God views put forth by Hindu thinkers. Christianity Most Christians around the world accept evolution as the most likely explanation for the origins of species and do not take a literal view of the Genesis creation narrative. The United States is an exception where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe. Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "...for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time." Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley, who were enthusiastic supporters of Darwin's theories upon their publication; the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is Liberal theology, which provides no creation models but instead focuses on the symbolism in the beliefs of the time Genesis was written and on its cultural environment. Many Christians and Jews had considered the creation history to be an allegory (rather than historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former Neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days, as a literal reading of the Book of Genesis would seem to require). It appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God." In the US, Evangelical Christians have continued to believe in a literal Genesis.
Members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations are the most likely to reject the evolutionary interpretation of the origins of life. Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length. The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. Adherents of this literal interpretation sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects. Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others. Hinduism Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness, which live through an endless cycle of births and rebirths. Ronald Numbers says that "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism; according to Hindu creationists, the universe may be even older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and history of the Earth. In Hindu cosmology, time cyclically repeats general events of creation and destruction, with a succession of "first men", each known as a Manu, the progenitor of mankind. Each Manu successively reigns over a 306.72-million-year period known as a manvantara, each of which ends with the destruction of mankind, followed by a sandhya (period of non-activity) before the next manvantara. 120.53 million years have elapsed in the current manvantara (current mankind) according to calculations on Hindu units of time. The universe is cyclically created at the start and destroyed at the end of a kalpa (day of Brahma), lasting for 4.32 billion years, which is followed by a pralaya (period of dissolution) of equal length.
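The durations quoted here and continued in the next passage are mutually consistent; as a rough check (our own arithmetic, using only the figures given in the text), the stated lengths relate as follows.

\[
\frac{4.32\times10^{9}\ \text{years (one kalpa)}}{306.72\times10^{6}\ \text{years (one manvantara)}} \approx 14.08,
\qquad
\frac{311.04\times10^{12}\ \text{years (one maha-kalpa)}}{4.32\times10^{9}\ \text{years (one kalpa)}} = 72{,}000.
\]

So a kalpa spans roughly fourteen manvantaras, with the small remainder accounted for by the intervening sandhya periods mentioned above, and a maha-kalpa spans 72,000 kalpas.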
1.97 billion years have elapsed in the current kalpa (current universe). The universal elements or building blocks (unmanifest matter) exist for a period known as a maha-kalpa, lasting for 311.04 trillion years, which is followed by a maha-pralaya (period of great dissolution) of equal length. 155.52 trillion years have elapsed in the current maha-kalpa. Islam Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions. Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam. Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for ... Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." However, some Muslims, such as Adnan Oktar (also known as Harun Yahya), do not agree that one species can develop from another. Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents. There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe, Big Bang and Big Crunch theories. Ahmadiyya The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God, as opposed to being the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a "guided evolution," viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community, has stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur, but only with God as the One who brings it about; it does not occur by itself, according to the Ahmadiyya Muslim Community. Judaism For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned. To these groups, science is as true as the Torah, and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the root word for "world" in the Hebrew language—עולם (Olam)—means hidden—נעלם (Neh-Eh-Lahm).
Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created, over the six days of Creation, in a way that reflects progression to its currently-observed state, with the understanding that physical ways to verify this may eventually be identified. This knowledge has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Also, relatively old Kabbalistic sources, dating from well before the scientifically apparent age of the universe was first determined, are in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan; this reading is based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he wrote this long before Neanderthals had been discovered scientifically). Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work. Some contemporary writers, such as Rabbi Gedalyah Nadel, have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago, but was deliberately made to look as if it was five billion years old, e.g. by being created with ready-made fossils. The best-known exponent of this approach is Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation. Prevalence Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden, at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science. Australia A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God". A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure, and 9% stated they do not believe in evolution.
When asked if creationism should be taught in schools, 89% of respondents said that it should. When asked if the teaching of creationism should replace the teaching of evolution in schools, 75% said that it should. Canada A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked "Where did human beings come from: did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?" In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% said that it should not, and 23% were undecided. Europe In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept that evolution is the most widely accepted scientific theory as taught in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people. In the UK, a 2006 poll on the "origin and development of life" asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory, and 13% did not know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism." In Italy, Education Minister Letizia Moratti wanted to retire evolution from the secondary school level; after one week of massive protests, she reversed her opinion. There continue to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe released a draft report titled The dangers of creationism in education on June 8, 2007, reinforced by a further proposal to ban it in schools, dated October 4, 2007. Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. After "a deluge of protest from scientists, teachers and opposition parties", says the BBC report, Čolić's deputy made the statement "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government." Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools.
His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed. A June 2015 – July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic. South Africa A 2011 Ipsos survey found that 56% of respondents in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes". South Korea In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools.

Progressive creationism holds that macroevolution is biologically untenable and not supported by the fossil record, and rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism. Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as day-age creationism or the framework/metaphoric/poetic views. Philosophic and scientific creationism Creation science Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to the teaching of evolution. Common features of creation science arguments include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result partly of pre-designed genetic variability and partly of the rapid degradation, through mutations, of the perfect genomes God placed in "created kinds" or "baramins". Neo-creationism Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment.
One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory. Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible. Intelligent design Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism." ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes. In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions. Geocentrism In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system), is a description of the cosmos where Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations such as ancient Greece. As such, they assumed that the Sun, Moon, stars, and naked eye planets circled Earth, including the noteworthy systems of Aristotle (see Aristotelian physics) and Ptolemy. 
Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, where the Sun and Moon are said to stop in the sky, and where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview. Most contemporary creationist organizations reject such perspectives. Omphalos hypothesis The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older. The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, etc..., and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable. Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator. Theistic evolution Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes." According to the American Scientific Affiliation: Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." 
Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection. Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about the Christian God and creation are compatible with some or all of modern scientific theory, including evolution; this position is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution. It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. Theistic evolution can be synthesized with the day-age creationist interpretation of the Genesis creation narrative; however, most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory. From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles through processes such as stellar evolution, that life forms developed through biological evolution, and that in the same way the origin of life arose by natural causes as a result of these laws. In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr.
George Coyne (the Vatican's chief astronomer between 1978 and 2006): "...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God." While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural. Religious views There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism. Baháʼí Faith In the creation myth taught by Bahá'u'lláh, the Baháʼí Faith founder, the universe has "neither beginning nor ending," and the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, `Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences at the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. `Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but held that the capacity to form human intelligence was always in existence. Buddhism Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning. Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa consistently critiqued Creator God views put forth by Hindu thinkers. Christianity Most Christians around the world have accepted evolution as the most likely explanation for the origins of species, and do not take a literal view of the Genesis creation narrative. The United States is an exception, where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor, because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe. Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "...for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time."
Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley, who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin, who saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, which provides no creation models but instead focuses on the symbolism of the beliefs current at the time Genesis was written and on its cultural environment. Many Christians and Jews had considered the creation history to be an allegory (rather than history) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine, writing in the late fourth century and himself a former Neoplatonist, argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require); it appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. In 1950, Pope Pius XII stated limited support for the idea in his encyclical Humani generis. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God." In the US, Evangelical Christians have continued to believe in a literal Genesis. Members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations are the most likely to reject the evolutionary interpretation of the origins of life. Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length. The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. Such literalists sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there have been allegorical interpretations of the Book of Genesis alongside literal ones. Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version.
Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute standpoint, since both proceed from a belief in the reality of a material universe.
be perceived as arrogant and incompetent. This resentment at last exploded in a tax revolt on November 1, 1965, in the Guéra Prefecture, causing 500 deaths. The following year saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to oust Tombalbaye militarily and end Southern dominance. It was the start of a bloody civil war. Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. More fortunate was his choice to break with the French and seek friendly ties with Libya's Brotherly Leader Gaddafi, which took away the rebels' principal source of supplies. But while he reported some success against the rebels, Tombalbaye began behaving more and more irrationally and brutally, steadily eroding his support among the southern elites, who dominated all key positions in the army, the civil service and the ruling party. As a consequence, on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup. Military rule (1975–1978) The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new junta. The new military leaders were unable to retain for long the popularity that they had gained through their overthrow of Tombalbaye. Malloum proved unable to cope with FROLINAT and in the end decided that his only chance lay in co-opting some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister. Civil war (1979–1982) Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene. A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure, which led to Goukouni's subsequent call for the complete withdrawal of external forces. The Habré era (1982–1990) see: Chadian-Libyan conflict Libya's partial withdrawal to the Aozou Strip in northern Chad cleared the way for Habré's forces to enter N’Djamena in June. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict.
Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad. Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou strip, effectively ending Libyan occupation. The Idriss Déby era (1990–2021) Rise to power However, rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N’Djamena. After 3 months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president. During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development, MDD, National Revival Committee for Peace and Democracy (CSNPD), Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime. However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF) led by former Kette ally Laokein Barde and the Democratic Front for Renewal (FDR), and a reformulated MDD clashed with government forces from 1994 to 1995. Multiparty elections Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougue (leader of the 1975 coup against Tombalbaye). 
Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. International observers noted numerous serious irregularities in presidential and legislative election proceedings. By mid-1997 the government signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most civilians. Since October 1998, Chadian Movement for Justice and Democracy (MDJT) rebels, led by Youssuf Togoimi until his death in September 2002, have skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition has emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a small-scale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000. Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transport of Chadian crude through a 1000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties keep their commitments. These "unique" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001. Déby won a flawed 63% first-round victory in May 2001 presidential elections after legislative elections were postponed until spring 2002. After they accused the government of fraud, six opposition leaders were arrested (twice) and one opposition party activist was killed following the announcement of election results. However, despite claims of government corruption, favoritism of Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy. In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed.
A number of border incidents led to the Chadian-Sudanese War. Oil production and military improvement Chad became an oil producer in 2003. In order to avoid the resource curse and corruption, elaborate plans sponsored by the World Bank were made. These plans ensured transparency in payments and required that 80% of the money from oil exports be spent on five priority development sectors, the two most important being education and healthcare. However, money started being diverted towards the military even before the civil war broke out. In 2006, when the civil war escalated, Chad abandoned the previous World Bank-sponsored economic plans and added "national security" as a priority development sector; money from this sector was used to improve the military. During the civil war, more than 600 million dollars were used to buy fighter jets, attack helicopters, and armored personnel carriers. Chad earned between 10 and 11 billion dollars from oil production, and an estimated 4 billion dollars were invested in the army. War in the East The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the "common enemy," which the Chadian government sees as the Rally for Democracy and Liberty (RDL) militants, Chadian rebels, backed by the Sudanese government, and Sudanese militiamen. Militants have attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of western Sudan currently claim asylum in eastern Chad. Chadian president Idriss Déby accuses Sudanese President Omar Hasan Ahmad al-Bashir of trying to "destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad." An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels, as reported by every news source other than CNN, or three hundred. The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denied any Sudanese involvement: "We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs." This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian air force into Sudanese airspace, which the Chadian government denies. An attack on N'Djamena was defeated on April 13, 2006, in the Battle of N'Djamena. The President stated on national radio that the situation was under control, but residents, diplomats and journalists reportedly heard weapons fire. On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group, the Rally of Democratic Forces, captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government.
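The oil-revenue figures quoted earlier in this section lend themselves to a quick back-of-the-envelope comparison between what the World Bank-sponsored plan earmarked for development and what ended up going to the military. The Python sketch below uses only the round numbers cited above (roughly 10 to 11 billion dollars earned, about 4 billion invested in the army, and the 80% development earmark); it is an illustration of the arithmetic, not official accounting.

```python
# Rough back-of-the-envelope check of the oil-revenue figures quoted above.
# All numbers are the approximate values given in the text, not official accounting.

oil_revenue_usd = (10e9, 11e9)   # total earned from oil production (low, high)
army_spending_usd = 4e9          # estimated amount invested in the army
planned_dev_share = 0.80         # share earmarked for priority development sectors

for total in oil_revenue_usd:
    military_share = army_spending_usd / total
    print(f"Total oil revenue ${total/1e9:.0f}bn -> "
          f"~{military_share:.0%} to the military vs. {planned_dev_share:.0%} "
          "originally earmarked for development sectors")
```

On these figures, roughly 36 to 40 percent of the oil earnings went to the army, against the 80 percent that the original plan reserved for the five priority development sectors.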
International orphanage scandal Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home on March 14, 2008, after nearly five months. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region. Rebel attack on N'Djamena On Friday, February 1, 2008, rebels of an opposition alliance led by Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who had been his chief of staff, attacked the Chadian capital of N'Djamena, even surrounding the Presidential Palace. But Idriss Déby and government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. The UN has said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accuse Déby of corruption and embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appears to be a power struggle within the elite that has long controlled Chad. The French government believes that the opposition has regrouped east of the capital. Déby has blamed Sudan for the current unrest in Chad. Regional interventionism During the Déby era, Chad intervened in conflicts in Mali, Central African Republic, Niger and Nigeria. In 2013, Chad sent 2,000 soldiers to help France in Operation Serval during the Mali War. Later in the same year Chad sent 850 troops to Central African Republic to support the peacekeeping operation MISCA; those troops withdrew in April 2014 after allegations of human rights violations. During the Boko Haram insurgency, Chad repeatedly sent troops to assist the fight against Boko Haram in Niger and Nigeria. In August 2018, rebel fighters of the Military Command Council for the Salvation of the Republic (CCMSR) attacked government forces in northern Chad. Chad experienced threats from jihadists fleeing the Libyan conflict. Chad had been an ally of the West in the fight against Islamist militants in West Africa. In January 2019, after 47 years, Chad restored diplomatic relations with Israel. It was announced during a visit to N’Djamena by Israeli Prime Minister Benjamin Netanyahu. After Idriss Déby (2021–present) In April 2021, Chad's army announced that President Idriss Déby had died of his injuries following clashes with rebels in the north of the country. Idriss Déby had ruled the country for more than 30 years, since 1990. It was also announced that a transitional military council, led by his son Mahamat Idriss Déby, would take power.
and economic orientation toward the Mediterranean Basin; and West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies. Chad also borders Northeast Africa, oriented toward the Nile Valley and the Red Sea region - and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity. Land Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger.
The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains. Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible. North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level. The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion. Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system. Water systems Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri. Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. 
Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year. The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta. The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95% of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season. Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast. Climate The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass. During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the "intertropical convergence zone". At the height of the rainy season, the front may reach as |
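The hydrological figures above (the Chari supplying 95 percent of Lake Chad's inflow, an average annual volume of 40 billion cubic meters, 95% of it lost to evaporation, and a surface area swinging between 9,800 and 25,500 square kilometers) can be combined into a rough consistency check. The Python sketch below carries out that arithmetic; the use of a simple midpoint as an "average" lake area is an assumption made purely for illustration.

```python
# Rough water-balance check using the figures quoted above for Lake Chad.
# The "average area" used here is a simple midpoint chosen for illustration only.

chari_inflow_m3 = 40e9          # average annual Chari inflow (cubic meters)
chari_share = 0.95              # Chari's share of total inflow to the lake
evaporation_share = 0.95        # fraction of the Chari inflow lost to evaporation

total_inflow_m3 = chari_inflow_m3 / chari_share
evaporated_m3 = evaporation_share * chari_inflow_m3

area_dry_km2, area_wet_km2 = 9_800, 25_500
avg_area_m2 = (area_dry_km2 + area_wet_km2) / 2 * 1e6

print(f"Implied total annual inflow: {total_inflow_m3/1e9:.1f} billion m^3")
print(f"Water evaporated from the Chari's contribution: {evaporated_m3/1e9:.1f} billion m^3")
print(f"Equivalent evaporation depth over the mid-range lake area: "
      f"{evaporated_m3/avg_area_m2:.1f} m per year")
```

The result, on the order of two meters of evaporation per year over the mid-range lake surface, is consistent with the text's picture of a shallow Sahelian lake whose size is governed by rainfall in the southern highlands and by temperatures in the Sahel.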
may have been an important point of dispersal in ancient times. Population According to the total population was in , compared to only 2 429 000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% was between 15 and 65 years of age, while 2.9% was 65 years or older. The country is projected to have a population of 34 million people in 2050 and 61 million in 2100. Vital statistics Registration of vital events in Chad is not complete. The Population Department of the United Nations prepared the following estimates. Fertility and births Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR): Fertility data as of 2014-2015 (DHS Program): Life expectancy Religions The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad: classical African religions, Islam, and Christianity. None is monolithic. The first tradition includes a variety of ancestor and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity arrived in Chad much more recently with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief. The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown. Other demographic statistics Demographic statistics according to the World Population Review in 2019.
One birth every 48 seconds
One death every 3 minutes
One net migrant every 360 minutes
Net gain of one person every 1 minute
The following demographic statistics are from the CIA World Factbook.
Population 15,833,116 (July 2018 est.) 12,075,985 (2017 est.)
Age structure 0-14 years: 48.12% (male 3,856,001 /female 3,763,622) 15-24 years: 19.27% (male 1,532,687 /female 1,518,940) 25-54 years: 26.95% (male 2,044,795 /female 2,222,751) 55-64 years: 3.25% (male 228,930 /female 286,379) 65 years and over: 2.39% (male 164,257 /female 214,754) (2018 est.)
Median age total: 15.8 years. Country comparison to the world: 226th male: 15.3 years female: 16.3 years (2018 est.) Total: 17.8 years Male: 16.8 years Female: 18.8 years (2017 est.)
Population growth rate 3.23% (2018 est.) Country comparison to the world: 5th
Birth rate 43 births/1,000 population (2018 est.) Country comparison to the world: 4th
Death rate 10.5 deaths/1,000 population (2018 est.) Country comparison to the world: 26th
Net migration rate -3.2 migrant(s)/1,000 population (2017 est.) Country comparison to the world: 176th
Total fertility rate 5.9 children born/woman (2018 est.) Country comparison to the world: 4th
Mother's mean age at first birth 17.9 years (2014/15 est.) note: median age at first birth among women 25-29
Dependency ratios total dependency ratio: 100.2 (2015 est.) youth dependency ratio: 95.2 (2015 est.) elderly dependency ratio: 4.9 (2015 est.) potential support ratio: 20.3 (2015 est.)
Contraceptive prevalence rate 5.7% (2014/15)
Urbanization urban population: 23.1% of total population (2018) rate of urbanization: 3.88% annual rate of change (2015-20 est.)
Sex ratio At birth: 1.04 male(s)/female Under 15 years: 1.01 male(s)/female 15–64 years: 0.92 male(s)/female 65 years and over: 0.66 male(s)/female Total population: 0.96 male(s)/female (2006 est.)
Life expectancy at birth total population: 57.5 years (2018 est.) Country comparison to the world: 214th male: 55.7 years (2018 est.) female: 59.3 years (2018 est.) Total
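The headline rates quoted above can be cross-checked with a little arithmetic: the "one birth every 48 seconds" style figures follow from the crude birth and death rates applied to the estimated population, and the dependency ratios follow from the age structure. The Python sketch below reproduces them approximately; note that the published dependency ratios are 2015 estimates while the age-structure shares are from 2018, so the recomputed values only roughly match.

```python
# Illustrative arithmetic based on the CIA-style figures quoted above.
# Note: the published dependency ratios are 2015 estimates while the age
# structure is from 2018, so the recomputed values only roughly match.

population = 15_833_116          # July 2018 estimate
birth_rate_per_1000 = 43         # births per 1,000 population per year
death_rate_per_1000 = 10.5       # deaths per 1,000 population per year

seconds_per_year = 365.25 * 24 * 3600
births_per_year = birth_rate_per_1000 / 1000 * population
deaths_per_year = death_rate_per_1000 / 1000 * population
print(f"One birth roughly every {seconds_per_year / births_per_year:.0f} seconds")
print(f"One death roughly every {seconds_per_year / deaths_per_year / 60:.1f} minutes")

# Dependency ratios from the 2018 age structure (percent of population).
youth_share = 48.12                          # ages 0-14
working_share = 19.27 + 26.95 + 3.25         # ages 15-64
elderly_share = 2.39                         # ages 65 and over
print(f"Youth dependency ratio:   {100 * youth_share / working_share:.1f}")
print(f"Elderly dependency ratio: {100 * elderly_share / working_share:.1f}")
print(f"Total dependency ratio:   {100 * (youth_share + elderly_share) / working_share:.1f}")
print(f"Potential support ratio:  {working_share / elderly_share:.1f}")
```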
the government and parliament. Chad is one of the most corrupt countries in the world. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. In April 2021, President Déby was injured by the rebel group Front Pour l'Alternance et La Concorde au Tchad (FACT). He succumbed to his injuries on April 20, 2021. The presidency was assumed by his son Mahamat Déby in April 2021. This resulted in both the National Assembly and the Chadian Government being dissolved and replaced with a Transitional Military Council. Executive branch Chad's executive branch is headed by the President and dominates the Chadian political system. Following the military overthrow of Hissène Habré in December 1990, Idriss Déby won the presidential elections in 1996 and 2001. The constitutional basis for the government is the 1996 constitution, under which the president was limited to two terms of office until Déby had that provision repealed in 2005. The president has the power to appoint the Council of State (or cabinet), and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's parastatal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly President and Council of State, may declare a state of emergency. Most of the key advisors for former president Déby were members of the Zaghawa clan, although some southern and opposition personalities were represented in his government. Legislative branch According to the 1996 constitution, the National Assembly deputies are elected by universal suffrage for 4-year terms. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions as necessary when called by the prime minister. Deputies elect a president of the National Assembly every 2 years. Assembly deputies or members of the executive branch may introduce legislation; once passed by the Assembly, the president must take action to either sign or reject the law within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no-confidence. However, if the National Assembly rejects the executive branch's program twice in one year, the president may disband the Assembly and call for new legislative elections.
In practice, the president exercises considerable influence over the National Assembly through the MPS party structure. Judicial branch Despite the constitution's guarantee of judicial independence from the executive branch, the president names most key judicial officials. The Supreme Court is made up of a chief justice, named by the president, and 15 councilors chosen by the president and National Assembly; appointments are for life. The Constitutional |
agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. It was finally developed and is now operated by ExxonMobil. In terms of GDP Chad ranks 143rd globally. Agriculture Chad produced in 2018: 987 thousand tons of sorghum; 893 thousand tons of groundnuts (peanuts); 756 thousand tons of millet; 484 thousand tons of yam (8th largest producer in the world); 475 thousand tons of sugarcane; 437 thousand tons of maize; 284 thousand tons of cassava; 259 thousand tons of rice; 255 thousand tons of sweet potato; 172 thousand tons of sesame seed; 151 thousand tons of beans; 120 thousand tons of cotton; as well as smaller quantities of other agricultural products. Macro-economic trend The following table shows the main economic indicators in 1980–2017. Other statistics GDP: purchasing power parity – $28.62 billion (2017 est.) GDP – real growth rate:
Television stations: 1 state-owned TV station, Tele Tchad (2007); 1 station (2001). Television sets: 10,000 (1997). Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned. State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more "responsible" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live
Telephone system: inadequate system of radiotelephone communication stations with high costs and low telephone density; fixed-line connections for less than 1 per 100 persons coupled with mobile-cellular subscribership base of only about 35 per 100 persons (2011). Satellite earth stations: 1 Intelsat (Atlantic Ocean) (2011). Internet Top-level domain: .td Internet users: 230,489 users, 149th in the world; 2.1% of the population, 200th in the world (2012); 168,100 users, 145th in the world (2009); 35,000 users, 167th in the world (2005). Fixed broadband: 18,000 subscriptions, 132nd in the world; 0.2% of the population, 161st in the world (2012). Wireless broadband: Unknown (2012). Internet hosts: 6 hosts, 229th in the world (2012); 9 hosts, 217th in the world (2006). IPv4: 4,096 addresses allocated, less than 0.05% of the world total, 0.4 addresses per 1000 people (2012). Internet censorship and surveillance There are no government restrictions on access to the
though two lines are planned - from the capital to the Sudanese and Cameroonian borders. Roads are mostly unpaved and are often impassable during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals (horses, donkeys and camels) remain important in much of the country. Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent. Railways As of 2011 Chad had no railways. Two lines were planned to Sudan and Cameroon from the capital, with construction expected to start in 2012. No operative lines were listed as of 2019. In 2021, an ADB study was funded for that rail link from Cameroon to Chad. Highways As of 2018 Chad had a total of 44,000 km of roads, of which approximately 260 km are paved. Some, but not all, of the roads in the capital N'Djamena are paved. Outside of N'Djamena there is one paved road which runs from Massakory in the north, through N'Djamena and then south, through the cities of Guélengdeng, Bongor, Kélo and Moundou, with a short spur leading in the direction of Kousseri, Cameroon, near N'Djamena. Expansion of the road towards Cameroon through Pala and Léré is reportedly in the preparatory stages. Waterways As of 2012, the Chari and Logone Rivers were navigable only in the wet season (2002). Both flow northwards, from the south of Chad, into Lake Chad. Pipelines Since 2003, a 1,070 km pipeline has been used to export crude oil from the oil fields around Doba to offshore oil-loading facilities on Cameroon's Atlantic coast at Kribi. The CIA World Factbook however cites
now little used. There is also a route across Sudan, to the Red Sea, but very little trade goes this way. Links with Niger, north of Lake Chad, are practically nonexistent; it is easier to reach Niger via Cameroon and Nigeria. Airports Chad had an estimated 58 airports, only 9 of which had paved runways. In 2015, scheduled airlines in Chad carried approximately 28,332 passengers. Airports with paved runways Statistics on airports with paved runways as of 2017: List of airports with paved runways:
Abeche Airport
Bol Airport
Faya-Largeau Airport
Moundou Airport
N'Djamena International Airport
Sarh Airport
Airports with unpaved runways Statistics on airports with unpaved runways as of 2013: Airline SAGA Airline of Chad - see http://www.airsaga.com Ministry of Transport The Ministry is represented at the regional level by the Regional Delegations, which have jurisdiction over a part of the National Territory as defined by Decree No. 003 / PCE / CTPT / 91. Their organization and responsibilities are defined by Order No. 006 / MTPT / SE / DG / 92. The Regional Delegations are: The Regional Delegation of the Center covering the regions of Batha, Guéra and Salamat with headquarters
Chadian National Armed Forces (Forces Armées Nationales Tchadiennes—FANT). The Military of Chad was dominated by members of Toubou, Zaghawa, Kanembou, Hadjerai, and Massa ethnic groups during the presidency of Hissène Habré. Later Chadian president Idriss Déby revolted and fled to the Sudan, taking with him many Zaghawa and Hadjerai soldiers in 1989. Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa. In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004. The current conflict in which the Chadian military is involved is the civil war against Sudanese-backed rebels. Chad has successfully repelled the rebel movements, though recently with some losses (see Battle of N'Djamena (2008)). The army uses its artillery systems and tanks, but well-equipped insurgents have probably managed to destroy over 20 of Chad's 60 T-55 tanks, and have probably shot down a Mi-24 Hind gunship that had bombed enemy positions near the border with Sudan. In November 2006, Libya supplied Chad with four Aermacchi SF.260W light attack planes. The Chadian Air Force uses them to strike enemy positions, but one was shot down by rebels. During the last battle of N'Djamena, gunships and tanks were put to good use, pushing armed militia forces back from the Presidential palace. The battle impacted the highest levels of the army leadership, as Daoud Soumain, its Chief of Staff, was killed. On March 23, 2020, a Chadian army base was ambushed by fighters of the jihadist insurgent group Boko Haram. The army lost 92 servicemen in one day. In response, President Déby launched an operation dubbed "Wrath of Boma". According to Canadian counter-terrorism expert St-Pierre, numerous external operations and rising insecurity in the neighboring countries had recently overstretched the capacities of the Chadian armed forces. After the death of President Idriss Déby on 19 April 2021 in fighting with FACT rebels, his son General Mahamat Idriss Déby was named interim president and head of the armed | Hissène Habré. Later Chadian president Idriss Déby revolted and fled to the Sudan, taking with him many Zaghawa and Hadjerai soldiers in 1989. Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa. In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004.
The current conflict in which the Chadian military is involved is the civil war against Sudanese-backed rebels. Chad has successfully repelled the rebel movements, though recently with some losses (see Battle of N'Djamena (2008)). The army uses its artillery systems and tanks, but well-equipped insurgents have probably managed to destroy over 20 of Chad's 60 T-55 tanks, and have probably shot down a Mi-24 Hind gunship that had bombed enemy positions near the border with Sudan. In November 2006, Libya supplied Chad with four Aermacchi SF.260W light attack planes. The Chadian Air Force uses them to strike enemy positions, but one was shot down by rebels. During the last battle
vicinity of Lake Chad, the lack of which led to border incidents in the past, has been completed and awaits ratification by Cameroon, Chad, Niger, and Nigeria. Americas Asia Despite centuries-old cultural ties to the Arab World, the Chadian Government maintained few significant ties to Arab states in North Africa or Southwest Asia in the 1980s. Chad has not recognised the State of Israel since former Chadian President François (Ngarta) Tombalbaye broke off relations in September 1972. President Habré hoped to pursue closer relations with Arab states as a potential opportunity to break out of his Chad's post-imperial dependence on France, and to assert Chad's unwillingness to serve as an arena for superpower rivalries. In addition, as a northern Muslim, Habré represented a constituency that favored Afro-Arab solidarity, and he hoped Islam would provide a basis for national unity in the long term. For these reasons, he was expected to seize opportunities during the 1990s to pursue closer ties with the Arab World. In 1988, Chad recognized the State of Palestine, which maintains a mission in N'Djamena. During the 1980s, Arab opinion on the Chadian-Libyan conflict over the Aozou Strip was divided. Several Arab states supported Libyan territorial claims to the Strip, among the most outspoken of which was Algeria, which provided training for anti-Habré forces, although most recruits for its training programs were from Nigeria or Cameroon, recruited and flown to Algeria by Libya. Lebanon's Progressive Socialist Party also sent troops to support Qadhafi's efforts against Chad in 1987. In contrast, numerous other Arab states opposed the Libyan actions, and expressed their desire to see the dispute over the Aozou Strip settled peacefully. By the end of 1987, Algiers and N'Djamena were negotiating to improve relations. In November 2018, President Deby visited Israel and announced his intention to restore diplomatic relations. Chad and Israel re-established diplomatic relations in January 2019. Europe Chad is officially non-aligned but has close relations with France, the former colonial power, which has about 1,200 troops stationed in the capital N'Djamena. It receives economic aid from countries of | these reasons, he was expected to seize opportunities during the 1990s to pursue closer ties with the Arab World. In 1988, Chad recognized the State of Palestine, which maintains a mission in N'Djamena. During the 1980s, Arab opinion on the Chadian-Libyan conflict over the Aozou Strip was divided. Several Arab states supported Libyan territorial claims to the Strip, among the most outspoken of which was Algeria, which provided training for anti-Habré forces, although most recruits for its training programs were from Nigeria or Cameroon, recruited and flown to Algeria by Libya. Lebanon's Progressive Socialist Party also sent troops to support Qadhafi's efforts against Chad in 1987. In contrast, numerous other Arab states opposed the Libyan actions, and expressed their desire to see the dispute over the Aozou Strip settled peacefully. By the end of 1987, Algiers and N'Djamena were negotiating to improve relations. In November 2018, President Deby visited Israel and announced his intention to restore diplomatic relations. Chad and Israel re-established diplomatic relations in January 2019. Europe Chad is officially non-aligned but has close relations with France, the former colonial power, which has about 1,200 troops stationed in the capital N'Djamena. 
It receives economic aid from countries of the European Community, the United States, and various international organizations. Libya supplies |
Blackstone Commentaries on Living, a series of books by Jiddu Krishnamurti originally published in 1956, 1958 and 1960 Commentary on Job, a sixth-century treatise by Saint Gregory Commentary of Zuo, one of the earliest Chinese works of narrative history, covering the period from 722 to 468 BCE Commentaries, a work attributed to Taautus Other uses Published opinion piece material, in any of several forms: An editorial, written by the editorial staff or board of a newspaper, magazine, or other periodical Column (periodical), a regular feature of such a publication in which usually the same single writer offers advice, observation, or other commentary An op-ed, an opinion piece by an author unaffiliated with the publication Letters to the editor, written by readers of such a publication Posts made in the comments section of an online publication, serving a similar function to paper periodicals' letters to the editor Commentary (philology), a line-by-line or even word-by-word explication (and usually translation) of a text Audio commentary track for DVDs and Blu-Rays – an additional audio track that plays in real-time with the video material, and comments on that video Sports commentary or play-by-play, a running description of | commentaries may refer to: Publications Commentary (magazine), a U.S. public affairs journal, founded in 1945 and formerly published by the American Jewish Committee Caesar's Commentaries (disambiguation), a number of works by or attributed to Julius Caesar Commentaries of Ishodad of Merv, set of ninth-century Syriac treatises on the Bible Commentaries on the Laws of England, a 1769 treatise on the common law of England by Sir William Blackstone Commentaries on Living, a series of books by Jiddu Krishnamurti originally published in 1956, 1958 and 1960 Commentary on Job, a sixth-century treatise by Saint Gregory Commentary of Zuo, one of the earliest Chinese works of narrative history, covering the period from 722 to |
and is the difference in mass density between the colloidal particle and the suspension medium. By rearranging, the sedimentation or creaming velocity is: There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension. The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. Preparation There are two principal ways to prepare colloids: Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing). Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold. Stabilization The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system. A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension. If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, therefore the colloid is unstable: if either of these processes occur the colloid will no longer be a suspension. Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation. Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte. 
Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from getting close in the range of attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents. A combination of the two mechanisms is also possible (electrosteric stabilization). A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to form a gel network. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum. Destabilization Destabilization can be accomplished by different methods: Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can manifest in significant alteration to the zeta potential. When the magnitude of the zeta potential lies below a certain threshold, typically around ±5 mV, rapid coagulation or aggregation tends to occur. Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer. Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects. Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied. Monitoring stability The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids.
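The passage above notes that rapid coagulation or aggregation tends to occur once the magnitude of the zeta potential drops below roughly ±5 mV. A minimal Python sketch of that rule of thumb follows; the function name and default threshold are illustrative assumptions, not a standard API or a universal stability criterion.

```python
# Rule-of-thumb check based on the passage above: dispersions whose zeta potential
# magnitude falls below roughly +/-5 mV tend to coagulate or aggregate rapidly.
# The function name and default threshold are assumptions for illustration only.

def likely_rapid_aggregation(zeta_potential_mv: float, threshold_mv: float = 5.0) -> bool:
    """Return True if rapid coagulation/aggregation is likely for this zeta potential."""
    return abs(zeta_potential_mv) < threshold_mv

if __name__ == "__main__":
    for zeta in (-42.0, -12.0, 3.5):
        print(f"zeta = {zeta:+.1f} mV -> rapid aggregation likely: {likely_rapid_aggregation(zeta)}")
```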
Dynamic light scattering can be used to detect the size of a colloidal particle by measuring how fast the particles diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation | and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber. The term hydrocolloids also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin, in order to reduce scarring, itching and soreness. Components Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, i.e. polyurethane, in order to 'stick' to the skin. Colloid compared with solution A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger. For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Interaction between particles The following forces play an important role in the interaction of colloid particles: Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles. Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases, are factors affecting this interaction. van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, and is always present (unless the refractive indexes of the dispersed and continuous phases are matched), is short-range, and is attractive. Steric forces between polymer-covered surfaces or in solutions containing non-adsorbing polymer can modulate interparticle forces, producing an additional steric repulsive force (which is predominantly entropic in origin) or an attractive depletion force between them. Sedimentation velocity The Earth's gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement. The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force: mg = 6πηrv, where mg is the Archimedean weight of the colloidal particles, η is the viscosity of the suspension medium, r is the radius of the colloidal particle, and v is the sedimentation or creaming velocity.
The mass of the colloidal particle is found using m = VΔρ, where V is the volume of the colloidal particle, calculated using the volume of a sphere V = (4/3)πr³, and Δρ is the difference in mass density between the colloidal particle and the suspension medium. By rearranging, the sedimentation or creaming velocity is v = 2r²Δρg/(9η). There is an upper size-limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension. The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. Preparation There are two principal ways to prepare colloids: Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing). Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold. Stabilization The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system. A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension. If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming; the colloid is therefore unstable: if either of these processes occurs, the colloid will no longer be a suspension. Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation. Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface, but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory.
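As a worked illustration of the settling relation above, v = 2r²Δρg/(9η), here is a minimal Python sketch. The particle and medium values (a roughly 1 μm silica-like sphere in water at room temperature) are assumptions chosen only to show the arithmetic; they are not data from the text.

```python
# Minimal sketch of the Stokes settling/creaming velocity v = 2 r^2 * d_rho * g / (9 * eta)
# discussed above. The numbers below are illustrative assumptions (a ~1 um silica-like
# sphere in water), not values taken from the text.

G = 9.81  # gravitational acceleration, m/s^2

def settling_velocity(radius_m: float, rho_particle: float,
                      rho_medium: float, viscosity_pa_s: float) -> float:
    """Sedimentation (positive) or creaming (negative) velocity in m/s."""
    delta_rho = rho_particle - rho_medium
    return 2.0 * radius_m**2 * delta_rho * G / (9.0 * viscosity_pa_s)

if __name__ == "__main__":
    v = settling_velocity(radius_m=0.5e-6,      # 0.5 um radius (1 um diameter)
                          rho_particle=2200.0,  # kg/m^3, silica-like particle
                          rho_medium=1000.0,    # kg/m^3, water
                          viscosity_pa_s=1.0e-3)
    print(f"settling velocity ~ {v:.2e} m/s ({v * 3.6e6:.2f} mm per hour)")
```

At this size the particle settles only a few millimetres per hour, which is why Brownian motion can still hold such a colloid close to sedimentation equilibrium.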
A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte. Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from getting close in the range of attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents. A combination of the two mechanisms is also possible (electrosteric stabilization). A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists in adding to the colloidal suspension a polymer able to |
Chinese, the standard form of Mandarin Chinese in Mainland China, similar to forms of Mandarin Chinese in Taiwan and Singapore Varieties of Chinese, topolects grouped under Chinese languages Written Chinese, writing scripts used for Chinese languages Chinese cuisine, styles of food originating from China or their derivatives Geography Chinese Peak (disambiguation) Chinese Camp, California, a tiny hamlet in northern California, the United States Other uses "Chinese Gordon", a nickname of Charles George Gordon (1833–1885), British military commander and administrator "Chinese", song about take out meal by Lily Allen from It's Not Me, It's You See also Chinese citizen (disambiguation) Tang Chinese (disambiguation) | the supra-ethnic concept of the Chinese nation List of ethnic groups in China, people of various ethnicities in contemporary China Han Chinese, the largest ethnic group in the world and the majority ethnic group in Mainland China, Hong Kong, Macau, Taiwan, and Singapore Ethnic minorities in China, people of non-Han Chinese ethnicities in modern China Ethnic groups in Chinese history, people of various ethnicities in historical China Nationals of the People's Republic of China Nationals of the Republic of China Overseas Chinese, Chinese people residing outside the territories of Mainland China, Hong Kong, Macau, and Taiwan Sinitic languages, the major branch of |
wasn't able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul, who normally rode shotgun, later said he thought the first shot killing Philpot had been meant for him. When Wyatt Earp first arrived in Tombstone in December 1879, he initially took a job as a stagecoach shotgun messenger for Wells Fargo, guarding shipments of silver bullion. When Wyatt Earp was appointed Pima County Deputy Sheriff on July 27, 1881, his brother Morgan Earp took over his job. Historical weapon When Wells, Fargo & Co. began regular stagecoach service from Tipton, Missouri to San Francisco, California in 1858, they issued shotguns to its drivers and guards for defense along the perilous 2,800 mile route. The guard was called a shotgun messenger and they were issued a Coach gun, typically a 10-gauge or 12-gauge, short, double-barreled shotgun. Modern usage More recently, the term has been applied to a game, usually played by groups of friends to determine who rides beside the driver in a car. Typically, this involves claiming the right to ride shotgun by being the first person to call out "shotgun". The game creates an environment that is fair by forgetting and leaving out most seniority except that parents and significant others automatically | threat to the cargo, which was usually a strongbox. Absence of an armed person in that position often signaled that the stage was not carrying a strongbox, but only passengers. Historical examples Tombstone, Arizona Territory On the evening of March 15, 1881, a Kinnear & Company stagecoach carrying US$26,000 in silver bullion () was en route from the boom town of Tombstone, Arizona Territory to Benson, Arizona, the nearest freight terminal. Bob Paul, who had run for Pima County Sheriff and was contesting the election he lost due to ballot-stuffing, was temporarily working once again as the Wells Fargo shotgun messenger. He had taken the reins and driver's seat in Contention City because the usual driver, a well-known and popular man named Eli "Budd" Philpot, was ill. Philpot was riding shotgun. Near Drew's Station, just outside Contention City, a man stepped into the road and commanded them to "Hold!" Three Cowboys attempted to rob the stage. Paul, in the driver's seat, fired his shotgun and emptied his revolver at the robbers, wounding a Cowboy later identified as Bill Leonard in the groin. Philpot, riding shotgun, and passenger Peter Roerig, riding in the rear dickey seat, were both shot and killed. The horses spooked and Paul wasn't able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul, who normally rode shotgun, later said he thought the first shot killing Philpot had been meant for him. When Wyatt Earp first arrived in Tombstone in December 1879, he initially took a job as a stagecoach shotgun messenger for Wells Fargo, guarding shipments of silver bullion. When Wyatt Earp was appointed Pima County Deputy Sheriff on July 27, 1881, his brother Morgan Earp took over his job. Historical weapon When Wells, Fargo & Co. began regular stagecoach service from Tipton, Missouri to San Francisco, California in 1858, |
food has cooled. This makes it unsafe to reheat cooked food more than once. Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by cooking for at least ten minutes at . Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate in the "Danger zone" temperature range from , food therefore should not be stored in this temperature range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination. Effects on nutritional content of food Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables. German research in 2003 showed significant benefits in reducing breast cancer risk when large amounts of raw vegetable matter are included in the diet. The authors attribute some of this effect to heat-labile phytonutrients. Sulforaphane, a glucosinolate breakdown product, which may be found in vegetables such as broccoli, has been shown to be protective against prostate cancer, however, much of it is destroyed when the vegetable is boiled. Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases The USDA has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods for various cooking methods. Carcinogens In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk. Many foods, such as beefsteak and broccoli, contain low concentrations of both carcinogens and anticarcinogens. Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below creates "negligible amounts" of HCAs. 
Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation. Research has shown that grilling, barbecuing and smoking meat and fish increases levels of carcinogenic polycyclic aromatic hydrocarbons (PAH). In Europe, grilled meat and smoked fish generally only contribute a small proportion of dietary PAH intake since they are a minor component of diet – most intake comes from cereals, oils and fats. However, in the US, grilled/barbecued meat is the second highest contributor of the mean daily intake of a known PAH carcinogen benzo[a]pyrene at 21% after 'bread, cereal and grain' at 29%. Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide, a known carcinogen from animal studies; its potential to cause cancer in humans at normal exposures is uncertain. Public health authorities recommend reducing the risk by avoiding overly browning starchy foods or meats when frying, baking, toasting or roasting them. Other health issues Cooking dairy products may reduce a protective effect against colon cancer. Researchers at the University of Toronto suggest that ingesting uncooked or unpasteurized dairy products (see also Raw milk) may reduce the risk of colorectal cancer. Mice and rats fed uncooked sucrose, casein, and beef tallow had one-third to one-fifth the incidence of microadenomas as the mice and rats fed the same ingredients cooked. This claim, however, is contentious. According to the Food and Drug Administration of the United States, health benefits claimed by raw milk advocates do not exist. "The small quantities of antibodies in milk are not absorbed in the human intestinal tract," says Barbara Ingham, PhD, associate professor and extension food scientist at the University of Wisconsin-Madison. "There is no scientific evidence that raw milk contains an anti-arthritis factor or that it enhances resistance to other diseases." Heating sugars with proteins or fats can produce advanced glycation end products ("glycotoxins"). Deep fried food in restaurants may contain high level of trans fat, which is known to increase levels of low-density lipoprotein that in turn may increase risk of heart diseases and other conditions. However, many fast food chains have now switched to trans-fat-free alternatives for deep-frying. Scientific aspects The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking. Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee | boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination. Effects on nutritional content of food Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. 
They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables. German research in 2003 showed significant benefits in reducing breast cancer risk when large amounts of raw vegetable matter are included in the diet. The authors attribute some of this effect to heat-labile phytonutrients. Sulforaphane, a glucosinolate breakdown product, which may be found in vegetables such as broccoli, has been shown to be protective against prostate cancer, however, much of it is destroyed when the vegetable is boiled. Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases The USDA has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods for various cooking methods. Carcinogens In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk. Many foods, such as beefsteak and broccoli, contain low concentrations of both carcinogens and anticarcinogens. Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation. Research has shown that grilling, barbecuing and smoking meat and fish increases levels of carcinogenic polycyclic aromatic hydrocarbons (PAH). In Europe, grilled meat and smoked fish generally only contribute a small proportion of dietary PAH intake since they are a minor component of diet – most intake comes from cereals, oils and fats. However, in the US, grilled/barbecued meat is the second highest contributor of the mean daily intake of a known PAH carcinogen benzo[a]pyrene at 21% after 'bread, cereal and grain' at 29%. 
Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide, a known carcinogen from animal studies; its potential to cause cancer in humans at normal exposures is uncertain. Public health authorities recommend reducing the risk by avoiding overly browning starchy foods or meats when frying, baking, toasting or roasting them. Other health issues Cooking dairy products may reduce a protective effect against colon cancer. Researchers at the University of Toronto suggest that ingesting uncooked or unpasteurized dairy products (see also Raw milk) may reduce the risk of colorectal cancer. Mice and rats fed uncooked sucrose, casein, and beef tallow had one-third to one-fifth the incidence of microadenomas as the mice and rats fed the same ingredients cooked. This claim, however, is contentious. According to the Food and Drug Administration of the United States, health benefits claimed by raw milk advocates do not exist. "The small quantities of antibodies in milk are not absorbed in the human intestinal tract," says Barbara Ingham, PhD, associate professor and extension food scientist at the University of Wisconsin-Madison. "There is no scientific evidence that raw milk contains an anti-arthritis factor or that it enhances resistance to other diseases." Heating sugars with proteins or fats can produce advanced glycation end products ("glycotoxins"). Deep-fried food in restaurants may contain high levels of trans fat, which is known to increase levels of low-density lipoprotein that in turn may increase risk of heart diseases and other conditions. However, many fast food chains have now switched to trans-fat-free alternatives for deep-frying. Scientific aspects The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking. Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author), and Robert Wolke (chemist, author). This is distinct from the application of scientific knowledge to cooking, that is, "molecular cooking" (for the technique) or "molecular cuisine" (for the culinary style), which is associated with chefs such as Raymond Blanc, Philippe and Christian Conticini, Ferran Adria, Heston Blumenthal and Pierre Gagnaire. Chemical processes central to cooking include hydrolysis (in particular beta elimination of pectins, during the thermal treatment of plant tissues), pyrolysis, and glycation reactions wrongly named Maillard reactions. Cooking foods with heat depends on many factors: the specific heat of an object, thermal conductivity, and perhaps most significantly the difference in temperature between the two objects. Thermal diffusivity is the combination of specific heat, conductivity and density that determines how long it will take for the food to reach a certain temperature. Home-cooking and commercial cooking Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants or schools.
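The thermal diffusivity mentioned above has a standard definition, α = k / (ρ·c_p), which combines thermal conductivity, density and specific heat. A short Python sketch follows; the water-like example values are assumptions for illustration, not figures taken from the text.

```python
# Sketch of thermal diffusivity alpha = k / (rho * c_p), combining the conductivity,
# density and specific heat mentioned above. The example values approximate liquid
# water and are assumptions for illustration.

def thermal_diffusivity(conductivity_w_mk: float,
                        density_kg_m3: float,
                        specific_heat_j_kgk: float) -> float:
    """Thermal diffusivity in m^2/s."""
    return conductivity_w_mk / (density_kg_m3 * specific_heat_j_kgk)

if __name__ == "__main__":
    alpha = thermal_diffusivity(conductivity_w_mk=0.6,      # ~water, W/(m*K)
                                density_kg_m3=1000.0,       # kg/m^3
                                specific_heat_j_kgk=4186.0) # J/(kg*K)
    print(f"thermal diffusivity ~ {alpha:.2e} m^2/s")       # about 1.4e-7 m^2/s
```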
Bakeries were one of the earliest forms of cooking outside the home, and bakeries in the past often offered the cooking of pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many "ready-to-eat" foods being prepared and cooked in factories and home cooks using a mixture of scratch-made and factory-made foods to make a meal. The nutritional value of including more commercially prepared foods has been found to be inferior to that of home-made foods. Home-cooked meals tend to be healthier, with fewer calories and less saturated fat, cholesterol and sodium on
game is based on the play of multiple rounds, or tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or "takes" the trick. The specific object varies with each game and can include taking as many tricks as possible, taking as many scoring cards within the tricks won as possible, taking as few tricks (or as few penalty cards) as possible, taking a particular trick in the hand, or taking an exact number of tricks. Bridge, Whist and Spades, and the various Tarot card games, are popular examples. Matching games The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid. Shedding games In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories. Catch and collect games The object of an accumulating game is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features. Fishing games In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a "simpler and jollier version of Cassino", played in Germany. Seep is a classic Indian fishing card game mainly popular in northern parts of India. Tablanet (tablić) is fishing-style game popular in Balkans. Comparing games Comparing card games are those where hand values are compared to determine the winner, also known as "vying" or "showdown" games. Poker, blackjack, and baccarat are examples of comparing card games. As seen, nearly all of these games are designed as gambling games. Solitaire (Patience) games Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more "discard" or "foundation" piles. Drinking card games Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are simply ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. 
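Returning to the shedding games described above, Crazy Eights and its relatives rest on a simple matching rule for each play. A minimal Python sketch of that rule follows; the exact convention used here (a card may be shed if it matches the rank or suit of the top discard, with eights wild) is a common household version assumed for illustration, since the text does not spell out the rule.

```python
# Minimal sketch of the play-legality rule in a shedding game such as Crazy Eights,
# mentioned above: a card may be shed if it matches the rank or suit of the top
# discard, and eights are conventionally wild. The Card type and rule details are
# common conventions assumed for illustration.

from collections import namedtuple

Card = namedtuple("Card", ["rank", "suit"])  # e.g. Card("7", "hearts")

def is_playable(card: Card, top_of_discard: Card) -> bool:
    """Return True if `card` may legally be shed onto `top_of_discard`."""
    return (card.rank == "8"                      # wild card
            or card.rank == top_of_discard.rank   # match rank
            or card.suit == top_of_discard.suit)  # match suit

if __name__ == "__main__":
    top = Card("7", "hearts")
    hand = [Card("7", "spades"), Card("2", "hearts"), Card("8", "clubs"), Card("K", "diamonds")]
    print([is_playable(card, top) for card in hand])  # [True, True, True, False]
```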
Some card games are designed specifically to be played as drinking games. Multi-genre games Many card games borrow elements from more than one type. The most common combination is matching and shedding, as in some variants of Rummy, Old Maid, and Go Fish. However, many multi-genre games involve different stages of play for each hand. The most common multi-stage combination is a "trick-and-meld" game, such as Pinochle or Belote. Other multi-stage, multi-genre games include Poke, Gleek, Skitgubbe, and Tichu. Collectible card games (CCGs) Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games. Casino or gambling card games These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house. Poker games Poker is a family of gambling games in which players bet into a pool, called the pot, the value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence. Other card games Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer. Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such. Simulation card games A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football. Fictional card games Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. 
Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others, such as "Exploding Snap" from the Harry Potter franchise, lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible. Typical structure of card games Number and association of players Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice. In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation. One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order. Another way of extending a two-player game to more players is as a cut-throat game, in which all players fight on their own, and win or lose alone. Most cut-throat card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all. For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others. Direction of play The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as: Counterclockwise in most of Asia and in Latin America. Clockwise in North America and Australia. 
Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise). Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules. Determining who deals Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation. As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, | these games are designed as gambling games. Solitaire (Patience) games Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more "discard" or "foundation" piles. Drinking card games Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are simply ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games. Multi-genre games Many card games borrow elements from more than one type. The most common combination is matching and shedding, as in some variants of Rummy, Old Maid, and Go Fish. However, many multi-genre games involve different stages of play for each hand. The most common multi-stage combination is a "trick-and-meld" game, such as Pinochle or Belote. Other multi-stage, multi-genre games include Poke, Gleek, Skitgubbe, and Tichu. Collectible card games (CCGs) Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. 
The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games. Casino or gambling card games These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house. Poker games Poker is a family of gambling games in which players bet into a pool, called the pot, the value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence. Other card games Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer. Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such. Simulation card games A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football. Fictional card games Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others, such as "Exploding Snap" from the Harry Potter franchise, lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible. Typical structure of card games Number and association of players Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice. 
In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation. One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order. Another way of extending a two-player game to more players is as a cut-throat game, in which all players fight on their own, and win or lose alone. Most cut-throat card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all. For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others. Direction of play The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as: Counterclockwise in most of Asia and in Latin America. Clockwise in North America and Australia. Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise). Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules. Determining who deals Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. 
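As an illustration of the ideas above, the direction of play amounts to nothing more than deciding which neighbour acts next. The following sketch is purely illustrative (the function and seat names are invented for the example) and does not come from any official ruleset.

```python
CLOCKWISE = 1           # e.g. most of northern Europe, North America, Australia
COUNTERCLOCKWISE = -1   # e.g. most of Asia, Latin America, southern Europe

def next_player(seat, num_players, direction=CLOCKWISE):
    """Return the seat index of the player who acts next in the direction of play."""
    return (seat + direction) % num_players

# Example: four players seated clockwise as North, East, South, West,
# playing a counterclockwise game.
seats = ["North", "East", "South", "West"]
current, order = 0, []
for _ in seats:
    order.append(seats[current])
    current = next_player(current, len(seats), COUNTERCLOCKWISE)
print(order)   # ['North', 'West', 'South', 'East']
```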
Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation. As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice. Hands, rounds and games A hand is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand. The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score. Shuffling Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes. For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. 
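The cut used to determine the first dealer and the subsequent shuffling can likewise be sketched in a few lines. The example below is illustrative only: it assumes a plain 52-card pack, replaces physical riffle or overhand shuffling with a simple uniform shuffle, and all helper names are invented for the illustration.

```python
import random

RANKS = list(range(2, 15))                      # 2..10, J=11, Q=12, K=13, A=14
SUITS = ["clubs", "diamonds", "hearts", "spades"]

def new_deck():
    return [(rank, suit) for suit in SUITS for rank in RANKS]

def cut_for_first_dealer(players, deck):
    """Each player lifts a packet and reveals its bottom card; the highest card deals.
    A tie is re-cut by the tied players only, as described above."""
    while True:
        reveals = dict(zip(players, random.sample(deck, len(players))))
        best = max(rank for rank, _ in reveals.values())
        tied = [p for p, (rank, _) in reveals.items() if rank == best]
        if len(tied) == 1:
            return tied[0]
        players = tied

deck = new_deck()
dealer = cut_for_first_dealer(["Ann", "Ben", "Cara", "Dev"], deck)
random.shuffle(deck)                            # stands in for riffle or overhand shuffling
print(dealer, deck[:3])
```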
French belote rules go so far as to prescribe that the deck never be shuffled between hands. Deal The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut: who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly. The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region). Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed, (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card. When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations. Rules A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules |
patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is in the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts. There are many cross-stitching "guilds" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats. Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most-commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s. Other materials used are pearl (or perle) cotton, Danish flower thread, silk and Rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. Hand-dyed cross-stitch floss is created just as the name implies—it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations can be subtle, while some can be a huge contrast. Some also have more than one color per thread, which in the right project, creates amazing results. Cross-stitch is widely used in traditional Palestinian dressmaking. Related stitches and forms of embroidery The cross-stitch can be executed partially such as in quarter-, half-, and three-quarter-stitches. A single straight stitch, done in the form of backstitching, is often used as an outline, to add detail or definition. There are many stitches which are related structurally to cross-stitch. The best known are Italian cross-stitch (as seen in Assisi embroidery), long-armed cross-stitch, and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look than ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher. The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch. Berlin wool work and similar petit point stitchery resembles the heavily shaded, opulent styles of cross-stitch, and sometimes also used charted patterns on paper. Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used. Cross stitch can often used in needlepoint. Recent trends for cross stitch Cross-stitch has become increasingly popular with the younger generation of Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010. 
Hobbycraft, a chain of stores selling craft supplies, also enjoyed an 11% increase in sales over the year to February 22, 2009. Knitting and cross-stitching have become more popular hobbies for a younger market, in contrast to its traditional reputation as a hobby for retirees. Sewing and craft groups such as Stitch and Bitch London have resurrected the idea of the traditional craft club. At Clothes Show Live 2010 there was a new area called "Sknitch" promoting modern sewing, knitting and embroidery. In a departure from the traditional designs associated with cross-stitch, there is a current trend for more postmodern or tongue-in-cheek designs featuring retro images or contemporary sayings. It is linked to a concept known as 'subversive cross-stitch', which involves more risque designs, often fusing the traditional sampler style with sayings designed to shock or be incongruous with the old-fashioned image of cross-stitch. Stitching designs on other materials can be accomplished by using waste canvas. This is a temporary gridded canvas similar to regular canvas used for embroidery that is held together by a water-soluble glue, which is removed after completion of stitch design. Other crafters have taken to cross-stitching on all manner of gridded objects as well including old kitchen strainers or chain-link fences. Traditionally, it is believed that cross stitch is a woman's craft. But lately there are men who are also addicted to this. Cross-stitch and Feminism In the 21st century, an emphasis on feminist design has emerged within cross-stitch communities. Some cross-stitchers have commented on the way that the practice of embroidery makes them feel connected to the women who practised it before them. There is a push for all embroidery, including cross-stitch, to be respected as a significant art form. Cross-stitch and | fabric and hang them on the wall for decoration. Cross-stitch is also often used to make greeting cards, pillowtops, or as inserts for box tops, coasters and trivets. Multicoloured, shaded, painting-like patterns as we know them today are a fairly modern development, deriving from similar shaded patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is in the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts. There are many cross-stitching "guilds" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats. Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most-commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s. Other materials used are pearl (or perle) cotton, Danish flower thread, silk and Rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. 
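The charting software mentioned in this passage essentially reduces an image to a small grid of thread colours. The sketch below only hints at that idea: the three-colour palette and chart symbols are invented for the illustration, and a real converter would match against a full floss range (for example DMC or Anchor colour cards) with far better colour handling.

```python
# Illustrative sketch: map a tiny RGB "image" to cross-stitch chart symbols
# by choosing the nearest colour from a small, made-up floss palette.
PALETTE = {
    "white": ((255, 255, 255), "."),
    "red":   ((200, 30, 30), "x"),
    "blue":  ((30, 60, 200), "o"),
}

def nearest_symbol(pixel):
    def dist(colour):
        return sum((a - b) ** 2 for a, b in zip(pixel, colour))
    name = min(PALETTE, key=lambda n: dist(PALETTE[n][0]))
    return PALETTE[name][1]

image = [
    [(250, 250, 250), (210, 40, 35), (250, 250, 250)],
    [(210, 40, 35), (20, 70, 190), (210, 40, 35)],
]

chart = ["".join(nearest_symbol(px) for px in row) for row in image]
print("\n".join(chart))   # prints ".x." then "xox"
```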
Hand-dyed cross-stitch floss is created just as the name implies—it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations can be subtle, while some can be a huge contrast. Some also have more than one color per thread, which in the right project, creates amazing results. Cross-stitch is widely used in traditional Palestinian dressmaking. Related stitches and forms of embroidery The cross-stitch can be executed partially such as in quarter-, half-, and three-quarter-stitches. A single straight stitch, done in the form of backstitching, is often used as an outline, to add detail or definition. There are many stitches which are related structurally to cross-stitch. The best known are Italian cross-stitch (as seen in Assisi embroidery), long-armed cross-stitch, and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look than ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher. The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch. Berlin wool work and similar petit point stitchery resembles the heavily shaded, opulent styles of cross-stitch, and sometimes also used charted patterns on paper. Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used. Cross stitch can often used in needlepoint. Recent trends for cross stitch Cross-stitch has become increasingly popular with the younger generation of Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010. Hobbycraft, a chain of stores selling craft supplies, also enjoyed |
amount wagered for a winning wager. The house edge or vigorish is defined as the casino profit expressed as the percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player double and splits.) In American roulette, there are two "zeroes" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. The chances of a player, who bets 1 unit on red, winning are 18/38 and his chances of losing 1 unit are 20/38. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. After 10 spins, betting 1 unit per spin, the average house profit will be 10 × 1 × 5.26% = 0.53 units. European roulette wheels have only one "zero" and therefore the house advantage (ignoring the en prison rule) is equal to 1/37 = 2.7%. The house edge of casino games varies greatly with the game, with some games having an edge as low as 0.3%. Keno can have house edges up to 25%, slot machines having up to 15%. The calculation of the roulette house edge is a trivial exercise; for other games, this is not usually the case. Combinatorial analysis and/or computer simulation is necessary to complete the task. In games that have a skill element, such as blackjack or Spanish 21, the house edge is defined as the house advantage from optimal play (without the use of advanced techniques such as card counting), on the first hand of the shoe (the container that holds the cards). The set of the optimal plays for all possible hands is known as "basic strategy" and is highly dependent on the specific rules and even the number of decks used. Traditionally, the majority of casinos have | known as croupiers or dealers. Random number games are based upon the selection of random numbers, either from a computerized random number generator or from other gaming equipment. Random number games may be played at a table or through the purchase of paper tickets or cards, such as keno or bingo. Some casino games combine multiple of the above aspects; for example, roulette is a table game conducted by a dealer, which involves random numbers. Casinos may also offer other types of gaming, such as hosting poker games or tournaments, where players compete against each other. Common casino games Notable games that are commonly found at casinos include: Table games Baccarat Blackjack Craps Roulette Poker (Texas hold'em, Five-card draw, Omaha hold'em) Big Six wheel Pool Gaming machines Pachinko Slot machine Video lottery terminal Video poker Random numbers Bingo Keno House advantage Casino games typically provide a predictable long-term advantage to the casino, or "house", while offering the players the possibility of a short-term gain that in some cases can be large. Some casino games have a skill element, where the players' decisions have an impact on the results. Players possessing sufficient skills to eliminate the inherent long-term disadvantage (the house edge or vigorish) in a casino game are referred to as advantage players. The players' disadvantage is a result of the casino not paying winning wagers according to the game's "true odds", which are the payouts that would be expected considering the odds of a wager either winning or losing. 
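The passage notes that, beyond roulette, computing a house edge usually requires combinatorial analysis or computer simulation. The Monte Carlo sketch below estimates the edge of the even-money red bet in American roulette, which can be checked against the exact figure of about 5.26% quoted here; the function name and spin count are arbitrary choices for the illustration.

```python
import random

def simulate_red_bet(spins=1_000_000):
    """Estimate the house edge of a 1-unit bet on red in American roulette.

    The wheel has 38 pockets: 18 red, 18 black and 2 zeroes, so the bet
    wins with probability 18/38; the exact edge is 2/38, about 5.26%.
    """
    total = 0
    for _ in range(spins):
        pocket = random.randrange(38)      # 0..17 red, 18..35 black, 36..37 zeroes
        total += 1 if pocket < 18 else -1
    return -total / spins                  # average player loss per unit = house edge

print(f"estimated house edge: {simulate_red_bet():.4f}")   # ~0.0526
```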
For example, if a game is played by wagering on the number that would result from the roll of one die, true odds would be 6 times the amount wagered since there is a 1 in 6 chance of any single number appearing, assuming that the player gets the original amount wagered back. However, the casino may only pay 4 times the amount wagered for a winning wager. The house edge or vigorish is defined as the casino profit expressed as the percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles and splits.) In American roulette, there are two "zeroes" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. A player who bets 1 unit on red has an 18/38 chance of winning 1 unit and a 20/38 chance of losing it. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. 
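The expected-value arithmetic above can be restated as a short calculation, shown below for the even-money bet on both the American (38-pocket) and European (37-pocket) wheels; it simply reproduces the formulas in the passage.

```python
from fractions import Fraction

def house_edge(pockets, winning_pockets=18, payout=1):
    """House edge of an even-money style bet: the negative of the 1-unit EV."""
    p_win = Fraction(winning_pockets, pockets)
    ev = p_win * payout - (1 - p_win) * 1
    return -ev

print(float(house_edge(38)))   # 0.0526...  American roulette (0 and 00)
print(float(house_edge(37)))   # 0.0270...  European roulette (single 0)
```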
Mobile game With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may utilize unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information and camera devices to support augmented reality gameplay. Cloud gaming Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, mobile phone or even a dedicated hardware device connected to a display, with good Internet connectivity that connects to hardware systems run by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers. Virtual reality Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse a player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands so as to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit. Emulation An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining the game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings. Backward compatibility Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically directly through hardware and built-in software within the platform. For example, the PlayStation 2 is capable of playing original PlayStation games simply by inserting the original game media into the newer console, while Nintendo's Wii could play Nintendo GameCube titles in the same manner. Game media Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates. 
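Underneath the emulation described above is a loop that fetches, decodes and executes the original machine's instructions in software. The toy interpreter below is purely illustrative, an invented three-instruction machine rather than any real console's CPU, and omits everything a real emulator must model (memory maps, timing, graphics and sound hardware).

```python
# Toy fetch-decode-execute loop for an invented 3-instruction machine.
def run(program, max_steps=100):
    acc, pc = 0, 0                       # accumulator and program counter
    while pc < len(program) and max_steps > 0:
        op, arg = program[pc]            # fetch
        max_steps -= 1
        if op == "LOAD":                 # decode and execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":                # jump to arg if accumulator is non-zero
            pc = arg if acc != 0 else pc + 1
            continue
        pc += 1
    return acc

# Counts 3 down to 0 by looping over the ADD/JNZ pair.
print(run([("LOAD", 3), ("ADD", -1), ("JNZ", 1)]))   # 0
```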
Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial and were developed by players from reverse engineering of the game, but other games provide official support for modding the game. Input device Video game can use several types of input devices to translate human actions to a game. Most common are the use of game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems along keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the control, and on other systems such as virtual reality, are used to enhance immersion into the game. Display and output By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements, text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays. The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game. Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game. Classifications Video games are frequently classified by a number of factors related to how one plays them. Genre A video game, like most other forms of media, may be categorized into genres. 
However, unlike film or television, which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror. Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter game genre, the first-person shooter and the third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game. Mode A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time. A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life. Intent Most video games are created for entertainment purposes, a category otherwise called "core games". There is a subset of games developed for additional purposes beyond entertainment. These include: Casual games Casual games are designed for ease of accessibility, simple-to-understand gameplay and quick-to-grasp rule sets, and are aimed at a mass market audience, as opposed to hardcore games. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often are from genres with low-intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. 
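The hierarchy of genres and subgenres described earlier in this passage can be represented as a simple mapping. The sketch below is a hedged illustration built mostly from genres named here ("adventure game" is added only to show a cross-genre case); it is not a complete or authoritative taxonomy.

```python
# Illustrative genre hierarchy; not an authoritative classification.
GENRES = {
    "shooter game": ["first-person shooter", "third-person shooter"],
    "action game": ["action-adventure game"],
    "adventure game": ["action-adventure game"],
}

def top_level_of(subgenre):
    """Return every top-level genre that lists the given subgenre."""
    return [top for top, subs in GENRES.items() if subgenre in subs]

print(top_level_of("first-person shooter"))    # ['shooter game']
print(top_level_of("action-adventure game"))   # ['action game', 'adventure game']
```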
More recently, starting in the late 2010s, hyper-casual games have used even more simplistic rules for short but infinitely replayable games, such as Flappy Bird. Educational games Education software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games that tend to focus on the entertainment value and rote learning but are unlikely to engage in critical thinking, and educational video games that are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as those that feature open worlds or virtual sandboxes like Minecraft, or those that build critical thinking skills through puzzle video games like SpaceChem. Serious games Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. Educational games are a form of serious games, but other types of serious games include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), flight simulators that simulate piloting commercial and military aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra). Art game Though video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore through the game world and scenarios. Most art games are indie games in nature, designed by a single developer or small team based on personal experiences or stories. Examples of art games include Passage, Flower, and That Dragon, Cancer. Content rating Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and on sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. 
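A ratings workflow of the kind described above ultimately maps content descriptors onto a minimum age band. The sketch below is purely hypothetical: the descriptors and thresholds are invented for the illustration and do not reproduce the criteria of ESRB, PEGI or any other board.

```python
# Hypothetical mapping from content descriptors to a minimum recommended age.
# The descriptors and thresholds are invented and match no real rating board.
DESCRIPTOR_MIN_AGE = {
    "mild violence": 7,
    "realistic violence": 16,
    "sexual content": 18,
    "gambling": 12,
    "drug or alcohol use": 16,
}

def minimum_age(descriptors):
    """The age band is driven by the most restrictive descriptor present."""
    return max((DESCRIPTOR_MIN_AGE.get(d, 3) for d in descriptors), default=3)

print(minimum_age(["mild violence", "gambling"]))              # 12
print(minimum_age(["realistic violence", "sexual content"]))   # 18
```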
A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body against the video game publisher for misuse of the ratings. The major content rating systems include: Entertainment Software Rating Board (ESRB), which oversees games released in the United States. ESRB ratings are voluntary and run along a scale of E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video game ratings in the U.S. subsequently led to the landmark Supreme Court case, Brown v. Entertainment Merchants Association in 2011, which ruled that video games were a protected form of art, a key victory for the video game industry. Pan European Game Information (PEGI), covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system rates content based on minimum recommended ages, which include 3+, 7+, 12+, 16+, and 18+. Australian Classification Board (ACB) oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to give a rating to a game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games. Computer Entertainment Rating Organization (CERO) rates games for Japan. Its ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over). Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content ratings systems between different regions, so that a publisher would only need to complete the content ratings review for one provider and use the IARC process to affirm the content rating for all other regions. Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus disallow the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional ones. This ruling was relaxed in 2018 to allow such imagery for "social adequacy" purposes, an exemption that already applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements. Development Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. 
Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers. In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs). Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology has expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, often which are across multiple platforms to support portability, or may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers to access other features, such as for playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developers' programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates. With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. 
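The division of labour described in this passage, where a game engine drives the update-and-render loop and a physics engine advances the simulated world each frame, can be sketched as a toy fixed-timestep loop. The example below (a single ball falling under gravity) is invented for the illustration and is not the API of any real engine or middleware.

```python
# Toy fixed-timestep game loop: the "physics engine" is a single Euler step.
GRAVITY = -9.81   # m/s^2
DT = 1 / 60       # fixed 60 Hz simulation step

class Ball:
    def __init__(self, y=10.0, vy=0.0):
        self.y, self.vy = y, vy

def physics_step(ball, dt=DT):
    """Advance the simulation by one step (semi-implicit Euler)."""
    ball.vy += GRAVITY * dt
    ball.y += ball.vy * dt
    if ball.y < 0:                 # crude collision with the ground
        ball.y, ball.vy = 0.0, 0.0

def render(ball, frame):
    print(f"frame {frame:3d}: y = {ball.y:6.2f}")

ball = Ball()
for frame in range(180):           # three simulated seconds
    physics_step(ball)             # game logic / physics update
    if frame % 60 == 0:
        render(ball, frame)        # draw (here: just print) once per second
```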
The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products. While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and are often experiment in gameplay and art style. Indie game development are aided by larger availability of digital distribution, including the newer mobile gaming marker, and readily-available and low-cost development tools for these platforms. Game theory and studies | target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to a teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphic it may be represented, and sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalty and fines issued by the ratings body on the video game publisher for misuse of the ratings. Among the major content rating systems include: Entertainment Software Rating Board (ESRB) that oversees games released in the United States. ESRB ratings are voluntary and rated along a E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video games ratings in the U.S. subsequently led to the landmark Supreme Court case, Brown v. Entertainment Merchants Association in 2011 which ruled video games were a protected form of art, a key victory for the video game industry. Pan European Game Information (PEGI) covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system uses content rated based on minimum recommended ages, which include 3+, 8+, 12+, 16+, and 18+. Australian Classification Board (ACB) oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). ACB can also deny to give a rating to game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games. Computer Entertainment Rating Organization (CERO) rates games for Japan. Their ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over). 
Additionally, the major content system provides have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content ratings system between different region, so that a publisher would only need to complete the content ratings review for one provider, and use the IARC transition to affirm the content rating for all other regions. Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus allow sale, of any game depicting Nazi imagery, and thus often requiring developers to replace such imagery with fictional ones. This ruling was relaxed in 2018 to allow for such imagery for "social adequacy" purposes that applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements. Development Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers. In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs). Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology has expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, often which are across multiple platforms to support portability, or may still opt to create their own for more specialized features and direct control of the game. 
Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers to access other features, such as for playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developers' programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates. With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products. While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and are often experiment in gameplay and art style. Indie game development are aided by larger availability of digital distribution, including the newer mobile gaming marker, and readily-available and low-cost development tools for these platforms. Game theory and studies Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter. Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. 
For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player. While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle on the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow and/or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game (a minimal sketch of this kind of rule-driven emergence is given at the end of this passage). Intellectual property for video games Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well. Though local copyright regulations vary in the degree of protection, video games qualify as copyrighted audiovisual works, and enjoy cross-country protection under the Berne Convention. This protection typically covers only the underlying code and the artistic aspects of the game, such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall under the idea–expression distinction, in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game. Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooding of the arcade and dedicated home console markets around 1978. Cloning is also a major issue in countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court has enabled China to support a large grey market of cloned hardware and software systems.
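As a concrete illustration of the emergent-behavior example described above (cars programmed only to keep a safe distance, with a jam arising on its own), here is a hypothetical, minimal simulation sketch in Python; the rule, the track layout, and the numbers are invented for illustration and do not come from any actual game.

# One-dimensional traffic sketch: each car follows a single local rule,
# "stop if something is too close ahead, otherwise speed up".
# A queue behind the obstacle emerges even though no rule mentions traffic jams.

OBSTACLE = 100.0   # position of a stalled obstacle on the track
SAFE_GAP = 5.0     # cars stop when the gap ahead drops below this
TOP_SPEED = 3.0

def step(cars):
    # cars is a list of (position, speed) pairs ordered from front to back.
    updated = []
    for i, (pos, speed) in enumerate(cars):
        ahead = OBSTACLE if i == 0 else updated[i - 1][0]
        if ahead - pos < SAFE_GAP:
            speed = 0.0                              # the only explicit rule: avoid a crash
        else:
            speed = min(TOP_SPEED, speed + 1.0)      # otherwise accelerate toward top speed
        updated.append((pos + speed, speed))
    return updated

cars = [(90.0 - 8.0 * i, TOP_SPEED) for i in range(5)]   # five cars approaching the obstacle
for _ in range(15):
    cars = step(cars)
print([round(pos, 1) for pos, _ in cars])   # positions bunch up just behind the obstacle

No line of this code describes a traffic jam, yet after a few steps the cars pile up nose-to-tail behind the obstacle, which is exactly the kind of unplanned, rule-driven outcome the passage above refers to.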
The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets. Industry History The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American market crashed in 1983, dropping from revenues of around in 1983 to by 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today. The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s this led to the industry centralizing around low-risk, triple-A games and studios with large development budgets of at least or more. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry. Industry roles Video games have a large network effect that draws on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include: Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games and will be involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performance. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. A range of other publisher-developer relationships exist between these points. Distributors: Publishers often are able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space. Retailers: Physical storefronts, which include large online retailers, department and electronics stores, and specialty video game stores, sell games, consoles, and other accessories to consumers.
This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and e-commerce, retailers have been performing worse than in the past. Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games. For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies also overlap with component supply for consoles. A range of third-party manufacturers also exist to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices. Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release. Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have had a significant impact on game sales, and have turned to using influencers alongside traditional journalism as a means to build up attention for their games before release. Esports: Esports is a major function of several multiplayer games with numerous professional leagues established since the 2000s, with large viewership numbers, particularly out of southeast Asia since the 2010s. Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry such as E3. Gamers: The players and consumers of video games, broadly. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or on user reviews and engage with them to work to improve their products in addition to other feedback from other parts of the industry. Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors. Major regional markets The industry itself grew out of both the United States and Japan in the 1970s and 1980s before growing to include larger worldwide contributions. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Western Europe, and southeast Asia including Japan, South Korea, and China.
Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field. Game sales According to the market research firm Newzoo, the global video game industry drew estimated revenues of over in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%. Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPG games and real-time strategy games. Computer games are also popular in China. Effects on society Culture Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020-2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing. Since the mid-2000s there has been debate over whether video games qualify as art, primarily on the grounds that the form's interactivity interferes with the artistic intent of the work and that games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016. Video games often inspire sequels and other video games within the same franchise, but have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises.
Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both of which received "Fresh" ratings, show signs that the film industry has found an approach for adapting video games to the big screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider. Since the 2000s, there has also been a growing appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully-scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together. Further, video games can serve as a virtual environment under full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian. Separately, video games are also frequently used as part of the promotion and marketing for other media, such as for films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, with an early trendsetting example being Batman: Arkham Asylum. Beneficial uses Besides their entertainment value, appropriately-designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noted that gamers adopt an attitude of such high concentration while playing that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking. Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers.
Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video game training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video game type. Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive effects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The report of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. The report found that those that played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds. A study of gamers' attitudes towards gaming, reported in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures". Controversies Video games have been the subject of controversy since the 1970s. Parents and children's advocates have raised concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can help reinforce the desire to continue playing through that compulsion loop and potentially lead to violent or addictive behavior.
Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep its products in check to avoid excessive violence, particularly for games aimed at younger children. The potential addictive behavior around games, coupled with increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as controversy around the use of loot boxes in many high-profile games. Numerous other controversies around video games and the industry have arisen over the years; among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB ratings system, numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007, the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport, and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time" (required extended working hours) in the weeks and months ahead of a game's release to ensure on-time delivery. Collecting and preservation Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collectors' items for the early days of the industry, with some rare publications having gone for over as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry. There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. |
the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 to 542 Ma, are more suitable. Paleogeography Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the Cambrian, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian. With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life. The sea levels fluctuated somewhat, suggesting there were "ice ages", associated with pulses of expansion and contraction of a south polar ice cap. In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea. Climate The Earth was generally cold during the early Cambrian, probably due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents. However, average temperatures were 7 degrees Celsius higher than today. There were likely polar ice caps and a series of glaciations, as the planet was still recovering from an earlier Snowball Earth. It became warmer towards the end of the period; the glaciers receded and eventually disappeared, and sea levels rose dramatically. This trend would continue into the Ordovician Period. Flora The Cambrian flora was little different from the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period. No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches around 500 mya, and microbes formed microbial Earth ecosystems, comparable with the modern soil crusts of desert regions, contributing to soil formation. Oceanic life The Cambrian explosion was a period of rapid diversification of multicellular life. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains. The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate, transforming the seabed ecosystems. Before the Cambrian, the sea floor was covered by microbial mats.
By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment that now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla except the Bryozoa, which appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla. While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when there occurred a sharp drop in biodiversity. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The latter half of the Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the | Series", although the two geologists disagreed for a while on the appropriate categorization. The Cambrian is unique in its unusually high proportion of sedimentary deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, our understanding of Cambrian biology surpasses that of some later periods. The Cambrian marked a profound change in life on Earth; prior to the Cambrian, the majority of living organisms on the whole were small, unicellular and simple (Ediacaran fauna being notable exceptions). Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized—hence readily fossilized—organisms became common. The rapid diversification of life forms in the Cambrian, known as the Cambrian explosion, produced the first representatives of all modern animal phyla. Phylogenetic analyses have supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates. Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren—with nothing more complex than a microbial soil crust and a few molluscs and arthropods (albeit not terrestrial) that emerged to browse on the microbial biofilm. By the end of the Cambrian, myriapods, arachnids, and hexapods would start adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period. Stratigraphy The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period.
The base of the Cambrian lies atop a complex assemblage of trace fossils known as the Treptichnus pedum assemblage. The use of Treptichnus pedum, a reference ichnofossil to mark the lower boundary of the Cambrian, is problematic because very similar trace fossils belonging to the Treptichnids group are found well below T. pedum in Namibia, Spain and Newfoundland, and possibly in the western USA. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain. Subdivisions The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point). Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used. In some of these subdivisions the Cambrian is divided into three epochs with locally differing names – the Early Cambrian (Caerfai or Waucoban, mya), Middle Cambrian (St Davids or Albertan, mya) and Furongian ( mya; also known as Late Cambrian, Merioneth or Croixan). Trilobite zones allow biostratigraphic correlation in the Cambrian. Rocks of these epochs are referred to as |
(length of) time. Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight. Relation (pros ti, toward something) – examples: double, half, large, master, knowledge. Place (pou, where) – examples: in a marketplace, in the Lyceum Time (pote, when) – examples: yesterday, last year Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing State, condition (echein, to have or be) – examples: shod, armed Action (poiein, to make or do) – examples: to lance, to heat, to cool (something) Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled Plotinus Plotinus in writing his Enneads around AD 250 recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms: Unity/Plurality Motion/Stability Identity/Difference Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process": First, there existed the "One", and his view that "the origin of things is a contemplation" The Second "is certainly an activity ... a secondary phase ... life streaming from life ... energy running through the universe" The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection ... and self knowledge" Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity". Kant In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows. 
Table of Judgements:
Mathematical – Quantity: Universal, Particular, Singular; Quality: Affirmative, Negative, Infinite.
Dynamical – Relation: Categorical, Hypothetical, Disjunctive; Modality: Problematic, Assertoric, Apodictic.
Table of Categories:
Mathematical – Quantity: Unity, Plurality, Totality; Quality: Reality, Negation, Limitation.
Dynamical – Relation: Inherence and Subsistence (substance and accident), Causality and Dependence (cause and effect), Community (reciprocity); Modality: Possibility, Existence, Necessity.
Criticism of Kant's system followed, firstly, by Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, by W. T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of Notion. Hegel G. W. F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed "the first principle of the world, the Absolute, is a system of categories ... the categories must be the reason of which the world is a consequent". Using his own logical method of combination, later to be called the Hegelian dialectic, of arguing from thesis through antithesis to synthesis, he arrived, as shown in W. T. Stace's work cited, at a hierarchy of some 270 categories. The three very highest categories were Logic, Nature and Spirit. The three highest categories of Logic, however, he called Being, Essence and Notion, which he explained as follows: Being was differentiated from Nothing by containing with it the concept of the "Other", an initial internal division that can be compared with Kant's category of Disjunction. Stace called the category of Being the sphere of common sense, containing concepts such as consciousness, sensation, quantity, quality and measure. Essence. The "Other" separates itself from the "One" by a kind of motion, reflected in Hegel's first synthesis of "Becoming". For Stace this category represented the sphere of science, containing within it firstly, the thing, its form and properties; secondly, cause, effect and reciprocity, and thirdly, the principles of classification, identity and difference. Notion. Having passed over into the "Other" there is an almost Neoplatonic return into a higher unity that in embracing the "One" and the "Other" enables them to be considered together through their inherent qualities. This according to Stace is the sphere of philosophy proper, where we find not only the three types of logical proposition: Disjunctive, Hypothetical and Categorical, but also the three transcendental concepts of Beauty, Goodness and Truth. Schopenhauer's category that corresponded with Notion was that of Idea, which in his "Four-Fold Root of Sufficient Reason" he complemented with the category of the Will. The title of his major work was "The World as Will and Idea". The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming.
At around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle. Peirce Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S.Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas. "It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought". Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions": Firstness (Quality): "The first is predominant in feeling ... we must think of a quality without parts, e.g. the colour of magenta ... When I say it is a quality I do not mean that it "inheres" in a subject ... The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants". Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling ... an action and reaction between our soul and the stimulus ... The idea of second is predominant in the ideas of causation and of statical force ... the real is active; we acknowledge it by calling it the actual". Thirdness (Meaning): "Thirdness is essentially of a general nature ... ideas in which thirdness predominate [include] the idea of a sign or representation ... Every genuine triadic relation involves meaning ... the idea of meaning is irreducible to those of quality and reaction ... synthetical consciousness is the consciousness of a third or medium". Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge". Others Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology. For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category. Contemporary systems of categories have been proposed by John G. 
Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (ontologist) (2003), and Jonathan Lowe (2006). See also Categories (Aristotle) Categories (Peirce) Categories (Stoic) Category (Kant) Metaphysics Modal logic Ontology Schema (Kant) Similarity (philosophy) References Selected bibliography Aristotle, 1953. Metaphysics. Ross, W. D., trans. Oxford University Press. --------, 2004. Categories, Edghill, E. M., trans. Uni. of Adelaide library. John G. Bennett, 1956–1965. The Dramatic Universe. London, Hodder & Stoughton. Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press. Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni. Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press. Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press. Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, Publishers, New York). Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press. Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge. Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays |
need for water content by 15–30%. Superplasticizers lead to retarding effects. Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding. Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid. Mineral admixtures and blended cements These are inorganic materials that have pozzolanic or latent hydraulic properties; such very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix, are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, which is notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes, the latter being relevant to circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices. Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties. Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production, it is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties. Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase the strength and durability of concrete, but generally requires the use of superplasticizers for workability. High-reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important. Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to its high tensile strength and high conductivity. Carbon products have been added to make concrete electrically conductive, for deicing purposes. Production Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens.
In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant. A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck. Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products. A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product. Design mix Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate (the second example from above), a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix. Concrete Mixes are primarily divided into nominal mix, standard mix and design mix. Nominal mix ratios are given in volume of . Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance. Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength. Mixing Thorough mixing is essential to produce uniform, high-quality concrete. 
has shown that the mixing of cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a , shear-type mixer at a w/cm (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water and final mixing is completed in conventional concrete mixing equipment. Sample analysis - Workability Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. The use of an aggregate blend with an undesirable gradation can result in a very harsh mix design with a very low slump, which cannot readily be made more workable by addition of reasonable amounts of water. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish. Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of . A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test. Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix. High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted. After mixing, concrete is a fluid and can be pumped to the location where needed. 
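To make the nominal-mix and water-cement-ratio ideas above a little more tangible, here is a small illustrative calculation; the 1:2:4 proportions, the 50 kg cement quantity, and the w/c value of 0.5 are example figures only (and the ratio is treated by mass for simplicity, whereas nominal mixes are traditionally batched by volume), not a design recommendation.

def batch_quantities(cement_kg, ratio=(1, 2, 4), w_c=0.5):
    # Scale a cement : sand : coarse-aggregate ratio from a chosen cement
    # quantity (treated by mass here for simplicity) and add mixing water
    # from the water-cement ratio.
    cement_part, sand_part, aggregate_part = ratio
    return {
        "cement (kg)": cement_kg,
        "sand (kg)": cement_kg * sand_part / cement_part,
        "coarse aggregate (kg)": cement_kg * aggregate_part / cement_part,
        "water (kg)": cement_kg * w_c,
    }

# Example: one 50 kg bag of cement with a 1:2:4 mix and w/c = 0.5
print(batch_quantities(50.0))
# {'cement (kg)': 50.0, 'sand (kg)': 100.0, 'coarse aggregate (kg)': 200.0, 'water (kg)': 25.0}

A design mix, by contrast, would adjust these proportions and add an admixture package based on tested properties of the specific cement, aggregates and site conditions, as described above.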
Curing Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars. Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete changes (increases) for up to three years. It depends on the cross-sectional dimensions of the elements and the conditions under which the structure is used. Addition of short-cut polymer fibers can improve (reduce) shrinkage-induced stresses during curing and increase early and ultimate compression strength. Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking. Techniques During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use. Traditional conditions for curing involve spraying or ponding the concrete surface with water. One of many ways to achieve this is ponding: submerging setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete. For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly. Alternative types Asphalt Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, and airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S.
immigrant Edward De Smedt. The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material. Concretene Concretene is very similar to concrete except that during the cement-mixing process, a small amount of graphene (< 0.5% by weight) is added. Microbial Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. Not all bacteria increase the strength of concrete significantly with their biomass. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compression strength. Nanoconcrete Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot bridges and highway bridges where high flexural and compressive strength are indicated. Pervious Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding. Polymer Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair and construction in applications such as drains. Volcanic Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock/ash are used as supplementary cementitious materials in concrete to improve the resistance to sulfate, chloride and alkali silica reaction due to pore refinement. Also, they are generally cost effective in comparison to other aggregates, good for semi and light weight concretes, and good for thermal and acoustic insulation. Pyroclastic materials, such as pumice, scoria, and ashes are formed from cooling magma during explosive volcanic eruptions.
They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 B.C. – 79 A.D.), which remains one of the best-preserved otium villae of the Bay of Naples in Italy. Waste light Waste light is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials with a grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm²) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m³ of shredded waste and no other aggregates. Properties Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep. Tests can be performed to ensure that the properties of concrete correspond to specifications for the application. The ingredients affect the strength of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures. The strength of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is used; higher-strength concrete is readily available commercially as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects, and still higher strengths are used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Very high strengths have been used commercially for these reasons. Energy efficiency Energy requirements for transportation of concrete are low because it is produced locally from local resources, typically manufactured within 100 kilometers of the job site. Similarly, relatively little energy is used in producing and combining the raw materials (although large amounts of CO2 are produced by the chemical reactions in cement manufacture). The overall embodied energy of concrete at roughly 1 to 1.5 megajoules per kilogram is therefore lower than for most structural and construction materials. Once in place, concrete offers great energy efficiency over the lifetime of a building.
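The embodied-energy figure quoted above is per kilogram; a quick back-of-envelope conversion to a per-cubic-metre figure is sketched below. The density of 2,400 kg/m³ is an assumed typical value for normal-weight concrete, not a number taken from the text.

```python
# Back-of-envelope sketch: embodied energy per cubic metre of concrete from the
# 1-1.5 MJ/kg range cited above. The density is an assumed typical value.

DENSITY_KG_PER_M3 = 2400.0  # assumed density of normal-weight concrete

def embodied_energy_mj_per_m3(mj_per_kg: float,
                              density_kg_per_m3: float = DENSITY_KG_PER_M3) -> float:
    return mj_per_kg * density_kg_per_m3

low = embodied_energy_mj_per_m3(1.0)   # ~2400 MJ per cubic metre
high = embodied_energy_mj_per_m3(1.5)  # ~3600 MJ per cubic metre
print(f"Embodied energy: roughly {low:.0f}-{high:.0f} MJ per cubic metre")
```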
Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Fire safety Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad. Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces. Earthquake safety As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g. school buildings in Istanbul, Turkey). Construction with concrete Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth. Reinforced concrete The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong in compression but much weaker in tension.
Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces. Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element. Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications
such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concrete, are used for specialized applications, predominantly as a means of controlling cracking.
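As a minimal illustration of how tension reinforcement is accounted for, the sketch below applies the classic rectangular-stress-block estimate of the nominal flexural capacity of a singly reinforced rectangular section. Strength-reduction factors, reinforcement limits and other code checks are deliberately omitted, and the section dimensions and material strengths are assumed example values, not figures from the text.

```python
# Minimal sketch of the rectangular-stress-block estimate of the nominal
# flexural capacity of a singly reinforced rectangular concrete section.
# Code checks (strength reduction, min/max reinforcement) are omitted.

def nominal_moment_kNm(As_mm2: float, fy_MPa: float, fc_MPa: float,
                       b_mm: float, d_mm: float) -> float:
    a = As_mm2 * fy_MPa / (0.85 * fc_MPa * b_mm)   # depth of equivalent stress block (mm)
    Mn_Nmm = As_mm2 * fy_MPa * (d_mm - a / 2.0)    # nominal moment (N*mm)
    return Mn_Nmm / 1e6                            # convert to kN*m

# Assumed example: three 16 mm bars (~603 mm^2), fy = 500 MPa, f'c = 30 MPa,
# section 300 mm wide with an effective depth of 450 mm.
print(round(nominal_moment_kNm(603, 500, 30, 300, 450), 1))  # ~129.7 kN*m
```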
Precast concrete Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emission from transportation to the construction site. Advantages to be achieved by employing precast concrete include the following: Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue. Major savings in time result from manufacturing structural elements apart from the series of events which determines the overall duration of the construction, known by planning engineers as the 'critical path'. Laboratory facilities capable of the required control tests are available, many being certified for specific testing in accordance with national standards. Equipment is available with capability suited to specific types of production, such as stressing beds with appropriate capacity, and moulds and machinery dedicated to particular products. High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs. Mass structures Due to cement's exothermic chemical reaction while setting, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion (a rough estimate of this temperature rise is sketched at the end of this section). To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures. Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass. Surface finishes Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing. Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants. Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials. The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
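As referenced in the mass structures discussion above, a first-order estimate of the adiabatic temperature rise of a large pour can be made by assuming all of the heat of hydration stays in the concrete. The heat of hydration, density and specific heat used below are assumed typical values, and real pours lose heat to the surroundings and to any cooling pipes.

```python
# Rough first-order estimate of the adiabatic temperature rise of a mass pour:
# all heat of hydration is assumed to remain in the concrete (no cooling pipes,
# no heat loss). All numeric parameters are assumed typical values.

def adiabatic_temp_rise_C(cement_kg_per_m3: float,
                          heat_of_hydration_kJ_per_kg: float = 400.0,   # assumed
                          density_kg_per_m3: float = 2400.0,            # assumed
                          specific_heat_kJ_per_kgK: float = 1.0) -> float:  # assumed
    heat_kJ = cement_kg_per_m3 * heat_of_hydration_kJ_per_kg
    return heat_kJ / (density_kg_per_m3 * specific_heat_kJ_per_kgK)

print(round(adiabatic_temp_rise_C(300), 1))  # ~50 C rise for 300 kg of cement per m^3
```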
Prestressed structures Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting. There are two different systems being used: Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them. Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used in which the tendons run along the outer surface of the concrete. Many highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture. Cold weather placement Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing. The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is: a period when, for more than three successive days, the average daily air temperature drops below 40 °F (~4.5 °C), and the air temperature stays below a specified threshold for more than one-half of any 24-hour period. In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1: when the air temperature is ≤ 5 °C, and when there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete (a simple check of these criteria is sketched at the end of this section). A minimum strength must be reached before exposing concrete to extreme cold; CSA A23.1 specifies a compressive strength of 7.0 MPa to be considered safe for exposure to freezing. Underwater placement Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork. Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled with pumped grout.
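The cold-weather criteria above can be expressed as a simple check, sketched below. The second ACI threshold is left as a parameter because its value is omitted in the text, and the CSA function follows the two conditions exactly as listed; this is an illustration, not a substitute for ACI 306 or CSA A23.1.

```python
# Sketch of cold-weather-placement checks in the spirit of the ACI 306 and
# CSA A23.1 criteria described above. Illustration only: the second ACI
# threshold is a placeholder because its value is omitted in the text.

from typing import Sequence

def aci_cold_weather(daily_avg_temps_C: Sequence[float],
                     hours_below_second_threshold: float,
                     low_avg_C: float = 4.5) -> bool:
    """Cold weather if the daily average air temperature has been below ~4.5 C
    (40 F) for three or more successive days and the temperature has stayed
    below the second (assumed) threshold for more than half of a 24-hour period."""
    three_day_cold = (len(daily_avg_temps_C) >= 3
                      and all(t < low_avg_C for t in daily_avg_temps_C[-3:]))
    return three_day_cold and hours_below_second_threshold > 12.0

def csa_cold_weather(air_temp_C: float, forecast_min_C_next_24h: float) -> bool:
    """Cold weather if the air temperature is <= 5 C and it may fall below 5 C
    within 24 hours of placing the concrete (the two conditions listed above)."""
    return air_temp_C <= 5.0 and forecast_min_C_next_24h < 5.0

print(aci_cold_weather([3.0, 2.5, 4.0], hours_below_second_threshold=14))  # True
print(csa_cold_weather(air_temp_C=4.0, forecast_min_C_next_24h=2.0))       # True
```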
Roads Concrete roads are more fuel-efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive in initial cost and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Removing the need to discard rainwater through drains also means that less electricity is needed (pumping is otherwise required in the water-distribution system), and rainwater is not polluted by mixing with contaminated runoff; instead, it is immediately absorbed by the ground. Environment, health and safety The manufacture and use of concrete produce a wide range of environmental, economic and social impacts. Concrete, cement and the environment A major component of concrete is cement, a fine, soft, powdery substance used mainly to bind fine sand and coarse aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions. The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneering cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2 (a rough per-cubic-metre estimate is sketched below). Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical. Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt. Concrete and climate change mitigation Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often, replacement of some clinker with large amounts of slag or fly ash has been investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash.
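As flagged above, the per-tonne-of-cement figures can be turned into a rough per-cubic-metre estimate for concrete. The cement content of 300 kg/m³ used below is an assumed typical value, not a figure from the text.

```python
# Back-of-envelope sketch: CO2 attributable to the cement in one cubic metre of
# concrete, using the emission intensities quoted above. The cement content of
# 300 kg/m^3 is an assumed typical value.

def concrete_co2_kg_per_m3(cement_kg_per_m3: float, co2_per_kg_cement: float) -> float:
    return cement_kg_per_m3 * co2_per_kg_cement

typical = concrete_co2_kg_per_m3(300, 1.00)    # ~1 tonne CO2 per tonne of cement
best_case = concrete_co2_kg_per_m3(300, 0.59)  # claimed 590 kg CO2eq per tonne
print(f"{typical:.0f} kg CO2/m^3 (average), {best_case:.0f} kg CO2/m^3 (claimed best case)")
```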
The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach. An environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when using fiber-reinforced high-performance concrete in place of typical reinforced concrete cladding. Studies have been conducted to provide data for the commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. The global warming potential (GWP) decreased by 1.1 kg CO2 eq/m³ for GGBS and by 17.3 kg CO2 eq/m³ for FA when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived. Researchers at the University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength. Concrete and climate change adaptation High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed. Concrete – health and safety Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The National Institute for Occupational Safety and Health (NIOSH) in the United States recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect on 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally be exposed to, to 50 micrograms per cubic meter of air over an 8-hour workday. That same rule went into effect on 23 June 2018 for general industry, hydraulic fracturing and maritime work; the deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment. Circular economy Concrete is an excellent material with which to make long-lasting and energy-efficient buildings.
However, even with good design, human needs change and potential waste will be generated. End-of-life: concrete degradation and waste Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The microfungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminum, iron, calcium, and silicon. Concrete may be considered waste according to the European Commission decision of 2014/955/EU for the List of Waste, under chapter 17 (construction and demolition wastes, including excavated soil from contaminated sites), sub-chapter 17 01 (concrete, bricks, tiles and ceramics), and the entries 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three polluters with 86,412, 68,976 and 68,732 thousand tons of construction waste generation, respectively. Currently, there are no end-of-waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste, repurposing it as a secondary raw material in various applications, including concrete manufacturing itself. Reuse of concrete Reuse of blocks in their original form, or after cutting into smaller blocks, has an even lower environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse. Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use in other building sites. Studies show that dismantling and remounting plans for building units (i.e., re-use of pre-fabricated concrete) offer an alternative form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete, in particular, can be kept in the life cycle longer through recycling. Prefabricated construction is a prerequisite for buildings that can be taken apart. In the case of optimal application in the building shell, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met. The viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements which may be classified as obsolete.
Recycling of concrete Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits. Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects. Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces waste landfill. However, it has little impact on reducing greenhouse gas emissions as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome. The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete that is fresh (wet) from ready-mix trucks, production waste from a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes. By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate. Chlorides and sulfates are undesired contaminants originating from soil and weathering and can provoke corrosion problems in aluminum and steel structures. The final product, Recycled Concrete Aggregate (RCA), presents distinctive properties such as an angular shape, a rougher surface, lower specific gravity (about 20% lower), higher water absorption, and a pH greater than 11 – this elevated pH increases the risk of alkali reactions. The lower density of RCA usually increases project efficiency and improves job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make it the preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, it is generally cheaper to obtain than virgin material. Applications of recycled concrete aggregate The main commercial applications of the final recycled concrete aggregate are: Aggregate base course (road base), or the untreated aggregates used as foundation for roadway pavement, is the underlying layer (under pavement surfacing) which forms a structural foundation for paving. To date, this has been the most popular application for RCA due to technical-economic aspects. Aggregate for ready-mix concrete, by simply replacing 10 to 45% of the natural aggregates in the concrete mix with a blend of cement, sand and water. Some concept buildings are showing the progress of this field. Because the RCA itself contains cement, the ratios of the mix have to be adjusted to achieve desired structural requirements such as workability, strength and water absorption.
Soil stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal-quality subgrade material to enhance the load-bearing capacity of that subgrade. Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH-value impacts. Landscape materials: to promote green architecture. To date, recycled concrete aggregate has been used as boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls, and more. Cradle-to-cradle challenges The applications developed for RCA so far are not exhaustive, and many more uses are to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economic way. However, considering the purpose of having a circularity of resources in the concrete life cycle, the only application of RCA that could be considered as recycling of concrete is the replacement of natural aggregates in concrete mixes. All the other applications would fall under the category of downcycling. It is estimated that even near-complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world. The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition, as well as conscious use of spaces in urban areas to reduce consumption. World records The world record for the largest concrete pour in a single project is the Three Gorges Dam in Hubei Province, China, by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters, held by the Itaipu hydropower station in Brazil. The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India.
Method failure has also been attributed to the chance of sperm in the pre-ejaculate. In comparison, the pill has a perfect-use failure rate of 0.3%, IUDs a rate of 0.1–0.6%, and internal condoms a rate of 2%. It has been suggested that the pre-ejaculate ("Cowper's fluid") emitted by the penis prior to ejaculation may contain spermatozoa (sperm cells), which would compromise the effectiveness of the method. However, several small studies have failed to find any viable sperm in the fluid. While no large conclusive studies have been done, it is believed by some that the cause of method (correct-use) failure is the pre-ejaculate fluid picking up sperm from a previous ejaculation. For this reason, it is recommended that the male partner urinate between ejaculations, to clear the urethra of sperm, and wash any ejaculate from objects that might come near the woman's vulva (e.g. hands and penis). However, recent research suggests that this might not be accurate. A contrary, yet non-generalizable study that found mixed evidence, including individual cases of a high sperm concentration, was published in March 2011. A noted limitation to these previous studies' findings is that pre-ejaculate samples were analyzed after the critical two-minute point. That is, looking for motile sperm in small amounts of pre-ejaculate via microscope after two minutes – when the sample has most likely dried – makes examination and evaluation "extremely difficult". Thus, in March 2011 a team of researchers assembled 27 male volunteers and analyzed their pre-ejaculate samples within two minutes after producing them. The researchers found that 11 of the 27 men (41%) produced pre-ejaculatory samples that contained sperm, and 10 of these samples (37%) contained a "fair amount" of motile sperm (i.e. as few as 1 million to as many as 35 million). This study therefore recommends, in order to minimize unintended pregnancy and disease transmission, the use of condoms from the first moment of genital contact. As a point of reference, a study showed that, of couples who conceived within a year of trying, only 2.5% included a male partner with a total sperm count (per ejaculate) of 23 million sperm or less. However, across a wide range of observed values, total sperm count (as with other identified semen and sperm characteristics) has weak power to predict which couples are at risk of pregnancy. Regardless, this study introduced the concept that some men may consistently have sperm in their pre-ejaculate, due to a "leakage," while others may not. Similarly, another robust study performed in 2016 found motile sperm in the pre-ejaculate of 16.7% (7/42) of healthy men. What is more, this study attempted to exclude contamination with sperm from ejaculate by drying the pre-ejaculate specimens to reveal a fern-like pattern, a characteristic of true pre-ejaculate. All pre-ejaculate specimens were examined within an hour of production and then dried; all pre-ejaculate specimens were found to be true pre-ejaculate. It is widely believed that urinating after an ejaculation will flush the urethra of remaining sperm. However, some of the subjects in the March 2011 study who produced sperm in their pre-ejaculate did urinate (sometimes more than once) before producing their sample. Therefore, some males can release the pre-ejaculate fluid containing sperm without a previous ejaculation. Advantages The advantage of coitus interruptus is that it can be used by people who have objections to, or do not have access to, other forms of contraception.
Some persons prefer it so they can avoid possible adverse effects of hormonal contraceptives or so that they can have a full experience and be able to "feel" their partner. Other reasons for the popularity of this method are that it has no direct monetary cost, requires no artificial devices, has no physical side effects, can be practiced without a prescription or medical consultation, and provides no barriers to stimulation. Disadvantages Compared to the other common reversible methods of contraception such as IUDs, hormonal contraceptives, and male condoms, coitus interruptus is less effective at preventing pregnancy. As a result, it is also less cost-effective than many more effective methods: although the method itself has no direct cost, users have a greater chance of incurring the risks and expenses of either child-birth or abortion. Only models that assume all couples practice perfect use of the method find cost savings associated with the choice of withdrawal as a birth control method.
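To illustrate how the annual failure rates quoted earlier compound over repeated years of use, the sketch below applies the usual independence assumption, 1 − (1 − p)^n. The 0.3% and 2% figures are the perfect-use rates cited above; the 20% figure is an assumed typical-use rate added for comparison, not a number from the text.

```python
# Illustrative sketch: how an annual contraceptive failure probability compounds
# over several years of use, assuming the per-year probability is independent
# and constant (a simplification). The 20% rate is an assumed example value.

def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one failure over `years` of use."""
    return 1.0 - (1.0 - annual_rate) ** years

for label, rate in [("pill, perfect use", 0.003),
                    ("internal condom, perfect use", 0.02),
                    ("assumed typical use", 0.20)]:
    print(f"{label}: {cumulative_failure(rate, 5):.1%} over 5 years")
```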
Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were only approved by the FDA for vaginal sex. The One Male Condom received FDA approval for anal sex on February 23, 2022. "Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage. Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse. Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive. It is recommended that condom manufacturers avoid very thick or very thin condoms, because both are considered less effective. Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it". Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage. Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or simply dislike the feel of condoms and decide to "take a chance". This type of behavior is the primary cause of typical use failure (as opposed to method or perfect use failure). Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use.
Using a fine needle to make several pinholes at the tip of the condom is believed to significantly impact on their effectiveness. Cases of such condom sabotage have occurred. Side effects The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present. Use Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually-transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs. Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects. Adult film industry In 2012 proponents gathered 372,000 voter signatures through a citizens' initiative in Los Angeles County to put Measure B on the 2012 ballot. As a result, Measure B, a law requiring the use of condoms in the production of pornographic films, was passed. This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to other places without this requirement. Producers claim that condom use depresses sales. Sex education Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active." In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use. Infertility treatment Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. 
These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse. Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Some men prefer collection condoms to masturbation, and some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them. For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic. Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates. Other uses Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found. Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies. Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose. Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve. Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording. Types Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes, from snug to larger, and shapes. Width often varies from 49 mm to 56 mm. Sizes from 45 mm to 60 mm, however exist. They also come in a variety of surfaces intended to stimulate the user's partner. 
Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist. Female condom Male condoms have a tight ring to form a seal around the penis while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile. Medtech Products produces a female condom made of latex. Materials Natural latex Latex has outstanding elastic properties: its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing. While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available. Synthetic The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most recently, polyisoprene. Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick. Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes. However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive. Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, polyisoprene condoms cannot be used with an oil-based lubricant. Lambskin Condoms made from sheep intestines, labeled "lambskin", are also available.
Although they are generally effective as a contraceptive by blocking sperm, they are presumed to be less effective than latex in preventing the transmission of sexually transmitted infections, because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses, such as HIV, herpes, and the virus that causes genital warts, are small enough to pass. However, there are to date no clinical data confirming or denying this theory. As a result of laboratory data on condom porosity, in 1989 the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases", and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs." Some believe that lambskin condoms provide a more "natural" sensation, and they lack the allergens that are inherent to latex, but because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and/or partners. Lambskin condoms are also significantly more expensive than other types and as slaughter by-products they are also not vegetarian. Spermicide Some latex condoms are lubricated at the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms. Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. Nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood has discontinued the distribution of condoms so lubricated. Ribbed and studded Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they may be located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms. Other The anti-rape condom is another variation designed to be worn by women.
It is designed to cause pain to the attacker, giving the victim a chance to escape. A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life. Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STDs. In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse. Prevalence The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms. History Before the 19th century Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, they were made of tortoise shell or animal horn. In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis. After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent decline in the fertility rate to use of "condons", the first documented use of that word (or any similar spelling). (Other early spellings include "condam" and "quondam", which have prompted the suggestion that the word derives from the Italian guantone, from guanto, "a glove".) In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis.
In the 18th century, Casanova was one of the first people reported to be using "assurance caps" to prevent impregnating his mistresses. From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and they are not used consistently due to inconvenience, expense, or loss of sensation. Despite some opposition, the condom market grew. Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method. Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include effects of the American Civil War, and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught that abstinence was the only way to avoid sexually transmitted diseases. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma against victims of these diseases was so great that many hospitals refused to treat people who had syphilis. The German military was the first to promote condom use among its soldiers, beginning in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use. In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control on the grounds that their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays. However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes.
Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s. Rubber and manufacturing advances In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote. For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. Latex, rubber suspended in water, was invented in 1920. Latex condoms required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber). Until the twenties, all condoms were individually hand-dipped by semi-skilled workers. Throughout the decade of the 1920s, advances in the automation of the condom assembly line were made. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market. 1930 to present In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. 
More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States. Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany, which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone. (This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.) In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however, was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979. After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly. Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous. New developments continued to occur in the condom market, with the first polyurethane condom, branded Avanti and produced by the manufacturer of Durex, introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. Condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US state of Vermont (on September 17, 2013, the California Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval). The global condom market was estimated at US$9.2 billion in 2020.
Etymology and other terms The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is, however, no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II ascended to the throne. A variety of unproven Latin etymologies have been proposed, based on words meaning a receptacle, a house, and a scabbard or case. It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown". Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name. Society and culture Some moral and scientific criticism of condoms exists despite the many benefits of condoms agreed on by scientific consensus and sexual health experts. Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STDs. Established couples, on the other hand, have few concerns about STDs, and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. The polarized debate over condom usage also depends on the target group at which the argument is directed. Notably, age and the question of a stable partner are factors, as well as the distinction between heterosexuals and homosexuals, who have different kinds of sex and face different risk factors and consequences. Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners. Religious The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live."
FIPS 10-4 codes used by the U.S. government and in the CIA World Factbook: list of FIPS country codes. On September 2, 2008, FIPS 10-4 was one of ten standards withdrawn by NIST as a Federal Information Processing Standard. The Bureau of Transportation Statistics, part of the United States Department of Transportation (US DOT), maintains its own list of codes, so-called World Area Codes (WAC), for state and country codes. GOST 7.67: country codes in Cyrillic from the GOST standards committee. From the International Civil Aviation Organization (ICAO): the national prefixes used in aircraft registration numbers, and location prefixes in four-character ICAO airport codes. International Olympic Committee (IOC) three-letter codes used in sporting events: list of IOC country codes. From the International Telecommunication Union (ITU): the E.164 international telephone dialing codes: list of country calling codes with 1-3 digits, the E.212 mobile country codes (MCC), for mobile/wireless phone addresses, the first few characters of call signs of radio stations (maritime, aeronautical, amateur radio, broadcasting, and so on) define the country: the ITU prefix, ITU letter codes for member-countries, ITU prefix - amateur and experimental stations - The International Telecommunication Union (ITU) assigns national telecommunication prefixes for amateur and experimental radio use, so that operators can be identified by their country of origin. These prefixes are legally administered by the national entity to which prefix ranges are assigned. Three-digit codes used to identify countries in maritime mobile radio transmissions, known as maritime identification digits. License plates for automobiles: Under the 1949 and 1968 United Nations Road Traffic Conventions (distinguishing signs of vehicles in international traffic): List of international license plate codes. Diplomatic license plates in the United States, assigned by the U.S. State Department. North Atlantic Treaty Organisation (NATO) used two-letter codes of its own: list of NATO country codes. They were largely borrowed from the FIPS 10-4 codes mentioned above. In 2003 the eighth edition of the Standardisation Agreement (STANAG) adopted the ISO 3166 three-letter codes with one exception (the code for Macedonia).
With the ninth edition, NATO is transitioning to four- and six-letter codes based on ISO 3166 with a few exceptions and additions. The United Nations Development Programme (UNDP) also has its own list of trigram country codes. World Intellectual Property Organization (WIPO): WIPO ST.3 gives two-letter codes to countries and regional intellectual property organizations. The World Meteorological Organization (WMO) has its own list of country codes, used in reporting meteorological observations. UIC (the International Union of Railways) maintains the UIC country codes. The developers of ISO 3166 intended that in time it would replace other coding systems in existence. Other codings The following can also represent countries: The initial digits of International Standard Book Numbers (ISBN) are group identifiers for countries, areas, or language regions. The first three digits of GS1 Company Prefixes used to identify products, for example, in barcodes.
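The practical consequence of this proliferation of systems is that one country carries many unrelated identifiers at once. The short Python sketch below illustrates this for a single country; the values are widely published codes for Germany under several of the schemes named above, collected here by the editor for illustration rather than quoted from any single standard, so they should be verified against the relevant standard before being relied on.

# Illustrative sketch: one country's identifiers under several coding systems.
# Values are widely published codes for Germany; verify against the relevant standard.
germany = {
    "ISO 3166-1 alpha-2": "DE",
    "ISO 3166-1 alpha-3": "DEU",
    "ISO 3166-1 numeric": "276",
    "IOC (Olympic)": "GER",
    "FIPS 10-4 (withdrawn 2008)": "GM",
    "ITU E.164 calling code": "+49",
    "Vehicle registration (UN conventions)": "D",
    "Aircraft registration prefix (ICAO)": "D",
}
for system, code in germany.items():
    print(f"{system:40} {code}")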
Criticism Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all. Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular. Issues The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features. In disciplines other than biology The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured. Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features. Comparative mythology and folktale use cladistic methods to reconstruct the protoversion of many myths. Mythological phylogenies constructed with mythemes clearly support low horizontal transmissions (borrowings), historical (sometimes Palaeolithic) diffusions and punctuated evolution. They also are a powerful way to test hypotheses about cross-cultural relationships among folktales. Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita. Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics).
Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time. Astrophysics infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification. See also Bioinformatics Biomathematics Coalescent theory Common descent Glossary of scientific naming Language family Patrocladogram Phylogenetic network Scientific classification Stratocladistics Subclade Systematics Three-taxon analysis Tree model Tree structure External links OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design) Willi Hennig Society Cladistics
Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours. A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal.
Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established. The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features. It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence. Terminology for taxa Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below. |
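The warm-bloodedness example above can be made concrete with a small-parsimony count. The sketch below is a minimal Python implementation of Fitch's parsimony algorithm, run on a deliberately simplified amniote tree assumed by the editor for illustration (the tree and the taxon sample are not taken from the text); it returns the minimum number of state changes the character requires on that tree, and the answer of two changes is exactly what marks warm-bloodedness as homoplastic rather than a synapomorphy of a mammal-plus-bird group.

# Minimal Fitch small-parsimony count on a fixed, illustrative tree.
# Assumed tree (simplified): (mammal, (lizard, (crocodile, bird)))
# Character: warm-bloodedness, 1 = present, 0 = absent.

def fitch(node, states):
    # A node is either a leaf name (str) or a pair of child subtrees (tuple).
    # Returns (possible ancestral state set, minimum number of changes so far).
    if isinstance(node, str):
        return {states[node]}, 0
    left_states, left_changes = fitch(node[0], states)
    right_states, right_changes = fitch(node[1], states)
    shared = left_states & right_states
    if shared:  # children can agree on a state: no extra change needed
        return shared, left_changes + right_changes
    return left_states | right_states, left_changes + right_changes + 1

tree = ("mammal", ("lizard", ("crocodile", "bird")))
warm_blooded = {"mammal": 1, "lizard": 0, "crocodile": 0, "bird": 1}

_, changes = fitch(tree, warm_blooded)
print("minimum changes on this tree:", changes)  # prints 2 -> two independent origins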
In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase. Gregorian The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its solar aspect is a cycle of leap days, repeating over 400 years, designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days). It was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted worldwide for the sake of convenience in international trade. The last European country to adopt the reform was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era). Religious The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days. While the Gregorian calendar is itself historically motivated by the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season. Eastern Christians, including the Orthodox Church, use the Julian calendar. The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years. Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).
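The figures quoted above for the Gregorian and Islamic calendars can be checked with a few lines of arithmetic. In the Python sketch below, the count of 97 leap years per 400-year cycle and the mean lunar year of about 354.37 days are standard values supplied by the editor rather than stated in the text; the rest follows from the numbers given above.

# Gregorian average year: 400-year cycle with 97 leap years (every 4th year,
# minus the 3 century years not divisible by 400) -- assumed leap count.
gregorian_average = (400 * 365 + 97) / 400
print(gregorian_average)             # 365.2425, as quoted above

# Islamic calendar drift: a 354/355-day year (about 354.37 days on average,
# an assumed figure) falls roughly 11 days short of the 365.2422-day solar year.
solar_year = 365.2422
lunar_year = 354.37
drift = solar_year - lunar_year
print(round(drift, 1))               # ~10.9 days of drift per year
print(round(solar_year / drift, 1))  # ~33.6 years for dates to cycle through the seasons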
The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques). Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí Calendar, also known as the Badi Calendar, was first established by the Bab in the Kitab-i-Asma. The Baháʼí Calendar is also purely a solar calendar and comprises 19 months of 19 days each. National The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar. Fiscal A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on the Diwali festival and end the day before the next year's Diwali festival. In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc. Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar. Formats The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc. In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days, it shows a conversion table to convert from weekday to date and back.
With a special pointing device, or by crossing out past days, it may indicate the current date. Calendars with one level of cycles pair the year with an ordinal date within the year, e.g., the ISO 8601 ordinal date system. Calendars with two levels of cycles combine either year, month, and day – as in most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar – or year, week, and weekday, e.g., the ISO week date. Cycles can be synchronized with periodic phenomena: Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar. Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar. Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar. The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month). Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia. Solar Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day. Lunar Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar. Lunisolar A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu calendar and the Buddhist calendar, which are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle. Subdivisions Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week.
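The ISO week date mentioned above is easy to explore with Python's standard library, as in the sketch below. It verifies the rule quoted in the fiscal-calendar passage earlier, that week 1 is always the week containing 4 January; the particular years are arbitrary choices by the editor.

# ISO 8601 week dates via the standard library: ISO weeks run Monday through
# Sunday, and week 1 is defined as the week containing 4 January.
from datetime import date

for year in (2015, 2016, 2020, 2021):
    iso = date(year, 1, 4).isocalendar()
    print(year, "Jan 4 -> ISO year", iso[0], "week", iso[1])  # the week is always 1

# 1 January, by contrast, can fall in the last ISO week of the previous year:
print(date(2016, 1, 1).isocalendar())  # week 53 of ISO year 2015 (a Friday)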
Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito. Other types Arithmetical and astronomical An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar. Complete and incomplete Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar. Usage The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season. Calendars are also used to help people manage their personal schedules, time, and activities, particularly when individuals have numerous work, school, and family commitments. People frequently use multiple systems and may keep both a business and family calendar to help prevent them from overcommitting their time. Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. 
The cosmic microwave background, discovered by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10⁵. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background. On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way. Formation and evolution of large-scale structure Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
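The 2.7 K black-body temperature quoted above is what places the relic radiation in the microwave band. The short Python calculation below applies Wien's displacement law to show this; the law itself, the displacement constant, and the more precise 2.725 K temperature are standard physics supplied by the editor rather than taken from the text.

# Peak wavelength of the CMB black-body spectrum from Wien's displacement law,
# lambda_max = b / T (constant and temperature are standard assumed values).
b = 2.897771955e-3   # Wien displacement constant, metre-kelvin
T = 2.725            # CMB temperature, kelvin
lambda_max = b / T
print(f"peak wavelength ~ {lambda_max * 1e3:.2f} mm")  # about 1.06 mm: microwaves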
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include: The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas. The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology. Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter. These will help cosmologists settle the question of when and how structure formed in the universe. Dark matter Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. Dark energy If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate. Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between: Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe. Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky. 
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology. A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a big freeze, or follow some other scenario. Gravitational waves Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang. In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction. Other areas of inquiry Cosmologists also study: Whether primordial black holes were formed in our universe, and what happened to them. Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies. The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe. See also Accretion Hubble's law Illustris project List of cosmologists Photon Physical ontology Quantum cosmology String cosmology Universal Rotation Curve References Further reading Popular Textbooks Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book. An introductory text, released slightly before the WMAP results. For undergraduates; mathematically gentle with a strong historical focus. An introductory astronomy text. The classic reference for researchers. Cosmology without general relativity. An introduction to cosmology with a thorough discussion of inflation. Discusses the formation of large-scale structures in detail. An introduction including more on general relativity and quantum field theory than most. Strong historical focus. The classic work on | the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago. 
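As a rough check of the statement that the accelerating phase began billions of years ago, the sketch below evaluates a flat Lambda-CDM model using the energy budget quoted in this article (roughly 73% dark energy and 27% total matter); the Hubble constant is an assumed value of mine, not taken from the text.

```python
import math

# Flat Lambda-CDM with the budget quoted above: ~73% dark energy, ~27% matter.
H0_km_s_Mpc = 70.0                        # assumption, not from the text
H0_per_Gyr = H0_km_s_Mpc * 1.0227e-3      # 1 km/s/Mpc ~ 1.0227e-3 per Gyr
omega_m, omega_de = 0.27, 0.73

# With a cosmological constant, acceleration begins when the deceleration
# parameter changes sign, i.e. when omega_m * (1+z)^3 = 2 * omega_de.
z_acc = (2.0 * omega_de / omega_m) ** (1.0 / 3.0) - 1.0

def E(z):                                 # H(z)/H0 for flat Lambda-CDM
    return math.sqrt(omega_m * (1.0 + z) ** 3 + omega_de)

# Lookback time to z_acc: integrate dt = dz / [(1+z) H(z)] numerically.
steps = 100_000
dz = z_acc / steps
lookback_Gyr = sum(dz / ((1.0 + (i + 0.5) * dz) * H0_per_Gyr * E((i + 0.5) * dz))
                   for i in range(steps))

print(f"acceleration began near z = {z_acc:.2f}, about {lookback_Gyr:.1f} Gyr ago")
```

With these inputs the onset falls near redshift 0.8, roughly six billion years ago, consistent with the statement above.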
Particle physics in cosmology During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period. As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is the Hubble time 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale is roughly equal to the age of the universe at each point in time. Timeline of the Big Bang Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever. Areas of study Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang. Very early universe The early, hot universe appears to be well explained by the Big Bang from roughly 10^−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation. Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed.
Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry. Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe. Big Bang Theory Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino. Standard model of Big Bang cosmology The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology. Cosmic microwave background The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 105. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. 
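The 2.7 K black-body temperature quoted above is what places the relic radiation in the microwave band. The short sketch below locates the peak of the corresponding Planck spectrum via Wien's displacement law; the constants and the more precise 2.725 K value are standard figures I have assumed, not taken from this text.

```python
# Peak of the black-body spectrum B_nu(T) for the CMB temperature.
h = 6.626e-34        # Planck constant, J s
k = 1.381e-23        # Boltzmann constant, J/K
T = 2.725            # CMB temperature in kelvins (refines the rounded 2.7 K above)

# Wien's displacement law in frequency form: nu_peak = 2.821 * k * T / h
nu_peak = 2.821 * k * T / h
print(f"spectral peak near {nu_peak / 1e9:.0f} GHz")   # ~160 GHz, i.e. microwaves
```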
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background. On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way. Formation and evolution of large-scale structure Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy. Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include: The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas. The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology. Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter. These will help cosmologists settle the question of when and how structure formed in the universe. Dark matter Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. 
The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. Dark energy If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate. Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between: Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe. Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky. Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems |
remote galaxies was redshifted; the more remote, the more shifted. This was quickly interpreted as meaning galaxies were receding from Earth. If Earth is not in some special, privileged, central position in the universe, then it would mean all galaxies are moving apart, and the further away, the faster they are moving away. It is now understood that the universe is expanding, carrying the galaxies with it, and causing this observation. Many other observations agree, and also lead to the same conclusion. However, for many years it was not clear why or how the universe might be expanding, or what it might signify. Based on a huge amount of experimental observation and theoretical work, it is now believed that the reason for the observation is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as a "metric" expansion. In the terminology of mathematics and physics, a "metric" is a measure of distance that satisfies a specific list of properties, and the term implies that the sense of distance within the universe is itself changing. Today, metric variation is far too small an effect to see on less than an intergalactic scale. The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979, while investigating the problem of why no magnetic monopoles are seen today. He found that if the universe contained a field in a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space. It was very quickly realized that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation theory largely resolves these problems as well, thus making a universe like ours much more likely in the context of Big Bang theory. No physical field has yet been discovered that is responsible for this inflation. However such a field would be scalar and the first relativistic scalar field proven to exist, the Higgs field, was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today. Theory An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly. The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. 
Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving in lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone. Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous. As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space. The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed. Space expands In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially). In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following metric: ds^2 = −dt^2 + e^(2√(Λ/3) t) dx^2. This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric.
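To make the statement about nearby observers being carried out of contact concrete, the sketch below evolves the proper separation of two comoving points in an exactly exponential (de Sitter) expansion; the Hubble rate and initial separation are arbitrary illustrative values, not quantities from the text.

```python
import math

H = 1.0                  # inflationary Hubble rate, arbitrary units (illustrative)
horizon = 1.0 / H        # the event horizon sits at a fixed proper distance c/H (c = 1)
d0 = 1e-6 / H            # initial proper separation of two comoving observers

# In de Sitter space the separation grows as d(t) = d0 * exp(H t), so it
# crosses the (static) horizon at t = ln(horizon / d0) / H.
t_cross = math.log(horizon / d0) / H
print(f"separation exceeds the horizon after {t_cross:.1f} Hubble times")  # ~13.8
```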
For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ. Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases. Few inhomogeneities remain Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe was only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes. The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for philosophical disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins. Duration A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation. Reheating Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from about 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase.
When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflation is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance. Motivations Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory. Horizon problem The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chase Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy. Flatness problem The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry). 
Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent. Magnetic-monopole problem The magnetic monopole problem, sometimes called the exotic-relics problem, says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced. This is a problem with Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field. Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe. A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!" History Precursors In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe. In the early 1970s Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success. False vacuum In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. 
Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology. The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum. Starobinsky inflation In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used the action S = (M_P^2/2) ∫ d^4x √(−g) [R + R^2/(6M^2)], which corresponds to the potential V(φ) = Λ^4 [1 − e^(−√(2/3) φ/M_P)]^2 in the Einstein frame. This results in the observables n_s = 1 − 2/N and r = 12/N^2, where N is the number of e-foldings of inflation. Monopole problem In 1978, Zeldovich noted the monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details. Early inflationary models Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the initial singularity of the Universe with an exponentially expanding de Sitter phase. In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions. Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed | parameters related to energy.
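Taking the Starobinsky-model relations given above at face value (they are standard results, but the specific forms are my reconstruction of formulas lost from this text), the sketch below evaluates the predicted spectral index and tensor-to-scalar ratio for typical e-fold numbers; the values can be compared with the Planck constraints quoted in the next passage.

```python
# Predicted observables of R^2 (Starobinsky) inflation, n_s = 1 - 2/N and
# r = 12/N^2, evaluated for a typical range of e-fold numbers N.
for N in (50, 55, 60):
    n_s = 1.0 - 2.0 / N
    r = 12.0 / N ** 2
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.4f}")
# N = 60 gives n_s ~ 0.967 and r ~ 0.0033, comfortably inside the
# n_s = 0.968 +/- 0.006 and r < 0.11 bounds quoted below.
```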
From Planck data it can be inferred that n_s = 0.968 ± 0.006, and a tensor-to-scalar ratio r that is less than 0.11. These are considered an important confirmation of the theory of inflation. Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics. Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias. An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (10^15–10^16 GeV) is correct. In March 2014, the BICEP2 team announced a detection of B-mode CMB polarization that was claimed to confirm inflation. The team announced the tensor-to-scalar power ratio was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported and, on 30 January 2015, even less confidence yet was reported. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation. Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference with radio sources on Earth and in the galaxy will be too great. Theoretical status In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles.
It is now believed by some that the inflaton cannot be the Higgs field, although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as inflaton. One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain. Fine-tuning problem One of the most severe challenges for inflation arises from the need for fine tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory. Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy. However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories. Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or about 10^−3 times the Planck energy. The natural scale is naïvely the Planck scale so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by a factor of roughly 10^−12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification.
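A quick numerical restatement of the hierarchy mentioned above; note that the 10^16 GeV inflation scale is the conventional estimate filled in earlier and the Planck energy is a standard figure, so both should be treated as assumptions rather than quantities from this text.

```python
# Ratio of the (assumed) inflationary energy scale to the Planck energy, and the
# corresponding suppression of the energy density, which scales as the 4th power.
E_inflation_GeV = 1e16          # conventional estimate, assumed above
E_planck_GeV = 1.22e19          # Planck energy

scale_ratio = E_inflation_GeV / E_planck_GeV
density_ratio = scale_ratio ** 4
print(f"scale ratio ~ {scale_ratio:.1e}, energy-density ratio ~ {density_ratio:.1e}")
# ~8e-4 and ~5e-13: the "about 10^-3" and "roughly 10^-12" figures quoted above.
```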
Eternal inflation In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time. All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model. Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that the inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic. Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions. In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating don't. This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is counted by volume, one should expect that inflation will never end or applying boundary conditions that a local observer exists to observe it, that inflation will end as late as possible. Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason. Initial conditions Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially it was, is and always will be, spatially infinite and has existed, and will exist, forever. 
Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally. Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while there is consensus that this solves the initial conditions problem, some have disputed this, as it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. According to them, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase. Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations. Another problem that has occasionally been mentioned is the trans-Planckian problem or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable. Hybrid inflation Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable to the second field to decay into a much lower energy state. In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. 
Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation. Relation to dark energy Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, about 10^−12 GeV, roughly 27 orders of magnitude less than the scale of inflation. Inflation and string cosmology The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism. Inflation and loop quantum gravity When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back. Alternatives and adjuncts Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation. Big bounce The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era. Ekpyrotic and cyclic models The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase.
In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years. String gas cosmology String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the
square centimetre. It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum: The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre. In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminosity function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminosity function. An appendix to the SI Brochure makes it clear that the luminosity function is not uniquely specified, but must be selected to fully define the candela. The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition. The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants. SI photometric light units Relationships between luminous intensity, luminous flux, and illuminance If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by Φv = 2π Iv [1 − cos(A/2)], where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps. If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits about 12.6 lumens. For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m2 (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally-uniform intensities, the contributions
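As a quick numerical check of the cone formula Φv = 2π Iv [1 − cos(A/2)] given above, the short Python sketch below reproduces the two worked figures from the text: a 590 cd lamp with a 40° radiation angle, and an ideal isotropic 1 cd source.

# Luminous flux from luminous intensity: Phi_v = 2*pi*I_v*(1 - cos(A/2)),
# where A is the full vertex angle of the emission cone.
import math

def luminous_flux(intensity_cd: float, full_angle_deg: float) -> float:
    """Flux in lumens emitted into a cone with the given full vertex angle."""
    half_angle = math.radians(full_angle_deg) / 2
    return 2 * math.pi * intensity_cd * (1 - math.cos(half_angle))

print(luminous_flux(590, 40))    # ~224 lm, matching the example in the text
print(luminous_flux(1, 360))     # isotropic 1 cd source: 4*pi ~ 12.57 lm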
by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research such as by Bloch on spin waves and Néel on antiferromagnetism led to the development of new magnetic materials with applications to magnetic storage devices. Modern many-body physics The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1956, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair. The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory. The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called the Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded leading to the discovery of topological insulators.
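The statement at the start of this passage, that the Ising model magnetizes spontaneously in two dimensions, can be explored numerically. Below is a minimal Metropolis Monte Carlo sketch in Python; the lattice size, temperatures, sweep count and cold start are illustrative choices of mine, with J = 1 and no external field. Run well below the 2D critical temperature T_c ≈ 2.27 (in units of J/k_B) the magnetization stays large, while above T_c it relaxes toward zero.

# Minimal Metropolis simulation of the 2D Ising model (J = 1, k_B = 1, no field).
# Illustrative only: small lattice, short run, no error analysis.
import numpy as np

rng = np.random.default_rng(0)

def simulate(L=24, T=1.5, sweeps=1000):
    spins = np.ones((L, L), dtype=int)               # cold start: all spins up
    for _ in range(sweeps):
        for _ in range(L * L):                       # one sweep = L*L spin updates
            i, j = rng.integers(0, L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn                # energy change if spin flips
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return abs(spins.mean())                         # magnetization per spin

print("T = 1.5 (below T_c):", simulate(T=1.5))       # stays close to 1: ordered
print("T = 3.5 (above T_c):", simulate(T=3.5))       # decays toward 0: disordered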
In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics. In 2012 several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations. Theoretical Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries. Emergence Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity. Electronic theory of solids The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann-Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. 
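The Drude picture described just above lends itself to a one-line estimate. The sketch below evaluates the Drude DC conductivity σ = n e² τ / m; the carrier density and relaxation time are rough textbook-style values for copper that I am assuming for illustration, not figures taken from the text.

# Drude estimate of the DC conductivity, sigma = n * e^2 * tau / m.
# n and tau are rough illustrative values for copper at room temperature.
from scipy.constants import e, m_e

n   = 8.5e28      # conduction electron density, m^-3 (assumed)
tau = 2.5e-14     # mean time between collisions, s (assumed)

sigma = n * e**2 * tau / m_e
print(f"conductivity ~ {sigma:.2e} S/m")      # ~6e7 S/m, the right order for copper
print(f"resistivity  ~ {1/sigma:.2e} ohm*m")  # ~1.7e-8 ohm*m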
In 1912, The structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem. Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it's very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory which gave realistic descriptions for bulk and surface properties of metals. The density functional theory (DFT) has been widely used since the 1970s for band structure calculations of variety of solids. Symmetry breaking Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry. Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations. Phase transition Phase transition refers to the change of phase of a system, which is brought about by change |
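Bloch's theorem and the band-structure calculations mentioned above can be made concrete in a toy model. The sketch below uses a hypothetical one-dimensional dimerized tight-binding chain with two sites per unit cell and hopping amplitudes chosen purely for illustration (it does not correspond to any particular material); it diagonalizes the Bloch Hamiltonian H(k) on a grid of crystal momenta and prints the two resulting energy bands.

# Toy band structure: 1D two-site (dimerized) tight-binding chain, lattice constant 1.
# Bloch Hamiltonian H(k) = [[0, t1 + t2*exp(-ik)], [conj, 0]]; its eigenvalues
# E(k) = +/- |t1 + t2*exp(ik)| form two bands separated by a gap of 2*|t1 - t2|.
import numpy as np

t1, t2 = 1.0, 0.6                      # intra- and inter-cell hoppings (illustrative)
ks = np.linspace(-np.pi, np.pi, 9)     # crystal momenta in the first Brillouin zone

for k in ks:
    h = np.array([[0, t1 + t2 * np.exp(-1j * k)],
                  [t1 + t2 * np.exp(+1j * k), 0]])
    bands = np.linalg.eigvalsh(h)      # real eigenvalues of the Hermitian H(k)
    print(f"k = {k:+.2f}: E = {bands[0]:+.3f}, {bands[1]:+.3f}")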
in which they grew up. Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH. Wolf, Sahlins, Mintz, and political economy In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche—an approach popularized by Marvin Harris. Economic anthropology as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton challenged standard neoclassical economics to take account of cultural and social factors, and employed Marxian analysis into anthropological study. In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis. In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance. Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan. Geertz, Schneider, and interpretive anthropology Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz was to state: Geertz's interpretive method involved what he called "thick description". 
The cultural symbols of rituals, political and economic action, and of kinship, are "read" by the anthropologist as if they are a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture, into the "experience-distant" theoretical concepts of the anthropologist. These interpretations must then be reflected back to its originators, and its adequacy as a translation fine-tuned in a repeated way, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schnieder's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies. Prominent British symbolic anthropologists include Victor Turner and Mary Douglas. The post-modern turn In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on his or her ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies. Socio-cultural anthropology subfields Anthropology of art Cognitive anthropology Anthropology of development Ecological anthropology Economic anthropology Feminist anthropology and anthropology of gender and sexuality Ethnohistory and historical anthropology Kinship and family Legal anthropology Multimodal anthropology Media anthropology Medical anthropology Political anthropology Political economy in anthropology Psychological anthropology Public anthropology Anthropology of religion Cyborg anthropology Transpersonal anthropology Urban anthropology Visual anthropology Methods Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others—usually missionaries, traders, explorers, or colonial officials—earning them the moniker of "arm-chair anthropologists". Participant observation Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. 
The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town. There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior. Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and his or her subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist. Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, he or she will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding. The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. 
A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions he or she is trying to answer. In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision. One common criticism of participant observation is its lack of objectivity. Because each anthropologist has his or her own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what he or she will eventually write about a culture, because each researcher is influenced by his or her own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead. Participant observation has also raised ethical questions, since an anthropologist is in control of what he or she reports about a culture. In terms of representation, an anthropologist has greater power than his or her subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence. Ethnography In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group. Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research. 
Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski and A. R. Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements. In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics). American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say. Cross-cultural comparison One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world. Comparison across cultures includes the industrialized (or de-industrialized) West, while the more traditional standard cross-cultural sample concentrates on small-scale societies. Multi-sited ethnography Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities.
Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus. A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities. Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing", such as a particular commodity, as it is transported through the networks of global capitalism. Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft. Sociocultural anthropologists have increasingly turned their investigative eye on to "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack-den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees. Topics in cultural anthropology Kinship and family Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another, and further, how those relationships operate within and define social organization. Research in kinship studies often crosses over into different anthropological subfields including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell: Kinship is the bedrock of all human societies that we know. All humans recognize fathers and mothers, sons and daughters, brothers and sisters, uncles and aunts, husbands and wives, grandparents, cousins, and often many more complex types of relationships in the terminologies that they use. That is the matrix into which human children are born in the great majority of cases, and their first words are often kinship terms.Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. 
There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man has several marriages to two or more women. The Nyar of India and Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships. It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit. There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this case, a child would have multiple biological mothers in the case that it is born of one woman and then breastfed by another. A child would have multiple biological fathers in the case that the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order have the necessary accumulation of semen. Late twentieth-century shifts in interest In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Neeham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical contex". The study of kinship evolved to accommodate for the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and individual experiences of those within it. This shift was progressed further by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the focus that had traditionally been paid to largely "foreign", non-Western communities. Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations. At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation, and must pay respect to racial and economic nuance as well. 
This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often replaced by a series of partnerships, as poor women cannot rely on regular financial contributions in a climate of economic instability. |
derivatives used in old measurements; e.g., international foot vs. US survey foot. Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. Conversion factors A conversion factor is used to change the units of a measured quantity without changing its value. The unity bracket method of unit conversion consists of a fraction in which the denominator is equal to the numerator, but they are in different units. Because of the identity property of multiplication, the value of a quantity will not change as long as it is multiplied by one. Also, if the numerator and denominator of a fraction are equal to each other, then the fraction is equal to one. So as long as the numerator and denominator of the fraction are equivalent, they will not affect the value of the measured quantity. The following example demonstrates how the unity bracket method is used to convert the rate 5 kilometers per second to meters per second. The symbols km, m, and s represent kilometer, meter, and second, respectively: 5 km/s × (1000 m / 1 km) = 5000 m/s. Thus, it is found that 5 kilometers per second is equal to 5000 meters per second.
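A short code sketch may make the unity bracket method concrete. The following Python fragment is only an illustration (the constant and function names are not taken from any particular library); it multiplies a measured value by fractions that equal one, so the value is unchanged while the units are exchanged, and the only facts assumed are the exact definitions 1 km = 1000 m and 1 h = 3600 s.

# Minimal sketch of the unity bracket method. The helper names are
# illustrative; only the definitions 1 km = 1000 m and 1 h = 3600 s are used.
M_PER_KM = 1000.0   # (1000 m / 1 km) is a fraction equal to one
S_PER_H = 3600.0    # (3600 s / 1 h) is a fraction equal to one

def km_per_s_to_m_per_s(speed_km_per_s):
    # Multiply by (1000 m / 1 km): the value is unchanged, the unit becomes m/s.
    return speed_km_per_s * M_PER_KM

def km_per_h_to_m_per_s(speed_km_per_h):
    # Chain two unity brackets: (1000 m / 1 km) * (1 h / 3600 s).
    return speed_km_per_h * M_PER_KM / S_PER_H

print(km_per_s_to_m_per_s(5.0))   # 5000.0, matching the worked example above
print(km_per_h_to_m_per_s(36.0))  # 10.0

Chaining several such fractions in one expression is exactly what the unity bracket notation does on paper.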
Software tools There are many conversion tools. They are found in the function libraries of applications such as spreadsheets and databases, in calculators, and in macro packages and plugins for many other applications such as mathematical, scientific and technical applications. There are also many standalone applications that offer conversions among thousands of units. For example, the free software movement offers the command-line utility GNU units for Linux and Windows. Calculation involving non-SI Units In cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the pre-factor and then plugging in the numerical values of the given/known quantities. For example, in the study of Bose–Einstein condensates, atomic mass m is usually given in daltons instead of kilograms, and the chemical potential μ is often given in units of the Boltzmann constant times nanokelvin. The condensate's healing length is given by ξ = ħ/√(2mμ). For a 23Na condensate with a chemical potential of (the Boltzmann constant times) 128 nK, the calculation of the healing length (in micrometres) can be done in two steps. First, calculate the pre-factor: assuming m = 1 Da and μ = kB × 1 nK gives ξ ≈ 15.574 μm, which is our pre-factor. Then calculate the numbers: make use of the fact that ξ scales as 1/√(mμ) when m is expressed in daltons and μ in units of kB × 1 nK. With m = 23 Da and μ = kB × 128 nK, the healing length is ξ ≈ 15.574 μm / √(23 × 128) ≈ 0.287 μm. This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values. For example, with the pre-factor calculated above, it is easy to see that the healing length of 174Yb with chemical potential kB × 20.3 nK is ξ ≈ 15.574 μm / √(174 × 20.3) ≈ 0.262 μm.
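The pre-factor calculation lends itself to a small reusable routine. The sketch below is one possible implementation under the stated assumptions: it uses CODATA values for ħ, the dalton, and the Boltzmann constant, and the function name healing_length_um is hypothetical rather than part of any standard library.

import math

# Sketch of the two-step pre-factor method described above.
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
DALTON = 1.66053906660e-27  # 1 Da in kg
K_B = 1.380649e-23          # Boltzmann constant, J/K
NANOKELVIN = 1e-9           # 1 nK in K

# Pre-factor: healing length for m = 1 Da and mu = k_B * 1 nK, in metres.
PREFACTOR_M = HBAR / math.sqrt(2.0 * DALTON * K_B * NANOKELVIN)  # ~1.557e-5 m

def healing_length_um(mass_da, mu_nk):
    # xi = hbar / sqrt(2 m mu), returned in micrometres, with the mass given
    # in daltons and the chemical potential in units of k_B * nK.
    return PREFACTOR_M / math.sqrt(mass_da * mu_nk) * 1e6

print(round(healing_length_um(23, 128), 3))    # ~0.287 (23Na at k_B * 128 nK)
print(round(healing_length_um(174, 20.3), 3))  # ~0.262 (174Yb at k_B * 20.3 nK)

The two printed values reproduce the 0.287 μm and 0.262 μm figures worked out by hand above.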
Tables of conversion factors This article gives lists of conversion factors for each of a number of physical quantities, which are listed in the index. For each physical quantity, a number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit. Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10⁻⁶ metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted. The quantities covered are length, area, volume, plane angle, solid angle, mass, density, time, frequency, speed or velocity, flow (volume), acceleration, force, pressure or mechanical stress, torque or moment of force, energy, power or heat flow rate, action, dynamic viscosity, kinematic viscosity, electric current, electric charge, electric dipole, electromotive force or electric potential difference, electrical resistance, capacitance, magnetic flux, magnetic flux density, inductance, temperature, and information entropy. Notes on mass: see Weight for detail of the mass/weight distinction and conversion; avoirdupois is a system of mass based on a pound of 16 ounces, while troy weight is the system of mass where 12 troy ounces equals one troy pound; in the mass table, a distinct symbol is used to denote standard gravity in order to avoid confusion with the (upright) g symbol for gram. Note on speed or velocity: a velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed. Note on information entropy: modern standards (such as ISO 80000) prefer the shannon to the bit as a unit for a quantity of information entropy, whereas the (discrete) storage space of digital devices is measured in bits. Thus, uncompressed redundant data occupy more than one bit of storage per shannon.
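In software, such a table of conversion factors is naturally represented as a mapping from unit names to their values in the corresponding SI unit. The sketch below is a hypothetical miniature of a length table (the dictionary and function names are illustrative); the factors for the inch, foot, yard and mile are exact by definition of the international yard (0.9144 m).

# Hypothetical miniature of a conversion-factor table: each unit is expressed
# in terms of the corresponding SI unit (here, the metre).
LENGTH_IN_METRES = {
    "metre": 1.0,
    "micron": 1e-6,
    "inch": 0.0254,
    "foot": 0.3048,
    "yard": 0.9144,
    "mile": 1609.344,
}

def convert_length(value, from_unit, to_unit):
    # Convert by passing through the SI unit: value * (m per from) / (m per to).
    return value * LENGTH_IN_METRES[from_unit] / LENGTH_IN_METRES[to_unit]

print(convert_length(1.0, "mile", "foot"))     # 5280.0
print(convert_length(2.5, "micron", "metre"))  # 2.5e-06

Passing through the SI unit keeps the table small: one factor per unit rather than one per pair of units.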
for their large populations. Urban planning Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice the process involves both public and private elements. The legal principle of eminent domain is used by government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Society Social structure Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic and racial lines. People living relatively close together may live, work, and play, in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development which surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the west, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status as factory workers which in the nineteenth century provided access to the means of production. Economics Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade cities have long been home to retail commerce and consumption through the interface of shopping. 
In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density enables also sharing of common infrastructure and production facilities, however in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, housekeeping and prostitution to grey-collar work in law, finance, and administration. Culture and communications Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves playing some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, world history, and social change. Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful. Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; to attract businesses, investors, residents, and tourists; and to create a shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Warfare Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. 
Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities. Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people concentrated into cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside. During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and it functionally extends modern urban crime prevention, which already uses concepts such as defensible space. Although capture is the more common objective, warfare has in some cases spelt complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombing of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists have continued to contemplate the use of "countervalue" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces. Climate change Infrastructure Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital (pipes, wires, plants, vehicles, etc.) but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private. Infrastructure in general (if not every infrastructure project) plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already. Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance. Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives. 
Utilities Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first century. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide. Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, streetlights and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications. Transportation Because cities rely on specialization and an economic system based on wage labour, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. Citydwellers travel on foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas. Historically, city streets were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the west, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In western cities, industrializing, expanding, and electrifying at this time, public transit systems and especially streetcars enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown. 
The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. Economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing Housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. Home ownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Ecology Urban ecosystems, influenced as they are by the density of human buildings and activities differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species which never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna include insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) which envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in comparable wilderness. 
Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C, and at times differences of 5–10 °C have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside. Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when it also intersects with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it). One of the main methods of improving urban ecology is to include more natural areas in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the city. They are generally referred to as urban open space (although this term does not always mean green space), green space, or urban greening. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city. A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. The benefits applied to men and women of all ages, across different ethnicities and socioeconomic statuses, and even among those with long-term illnesses and disabilities. People who did not get at least two hours — even if they surpassed an hour per week — did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study did not count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles from home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit." World city system As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media. 
Global city A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo, to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity. Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems. Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities. Global cities feature concentrations of extremely wealthy and extremely poor people. Their economies are lubricated by their capacity (limited by the national government's immigration policy, which functionally defines the supply side of the labor market) to recruit low- and high-skilled immigrant workers from poorer areas. More and more cities today draw on this globally available labor force. Transnational activity Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and the City of London maintain their own embassies to the European Union at Brussels. New urban dwellers may increasingly be seen not simply as immigrants but as transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes. Global governance Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant | by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes. In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings and fostering multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz. 
In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of athletic, artistic, spiritual and political life of the polis. Rome was the first city to surpass one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities (coloniae), and with them brought its principles of urban architecture, design, and society. In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization and the Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th century BC and the 18th century BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilization, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan, while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos, and others, like Lima, are located near ancient Peruvian sites such as Pachacamac. Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao. Middle Ages In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453. In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns, having won self-governance from their local lay or ecclesiastical lord or having been granted self-governance by the emperor and been placed under his immediate protection. By 1480, those cities still part of the empire had become part of the Imperial Estates, governing the empire with the emperor through the Imperial Diet. 
By the thirteenth and fourteenth centuries, some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan. In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 sq km and possibly supporting up to one million people. Early modern In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small. During the Spanish colonization of the Americas the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound by several laws regarding administration, finances and urbanism. Industrial age The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to urban areas. Industrialized cities became deadly places to live, due to health problems resulting from overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape. Post-industrial age In the second half of the twentieth century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, the People's Republic of China has undergone concomitant urbanization and industrialization and has become the world's leading manufacturer. Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city-dwellers. 
Some companies are building brand new masterplanned cities from scratch on greenfield sites. Urbanization Urbanization is the process of migration from rural into urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions, urban population began its unprecedented growth, both through migration and through demographic expansion. In England the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world population lived in cities. The cultural appeal of cities also plays a role in attracting residents. Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time more than half of the world population lives in cities. Latin America is the most urbanized region, with four fifths of its population living in cities, including one fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city-dwellers (and 300 million fewer country-dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa. Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions. Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground. Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels. Government Local government of cities takes different forms including prominently the municipality (especially in England, in the United States, in India, and in other British colonies; legally, the municipal corporation; municipio in Spain and in Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and in Chile; or comune in Italy). The chief official of the city has the title of mayor. 
Whatever their true degree of political authority, mayors typically act as the figurehead or personification of their city. City governments have authority to make laws governing activity within cities, while their jurisdiction is generally considered subordinate (in ascending order) to state/provincial, national, and perhaps international law. This hierarchy of law is not enforced rigidly in practice—for example in conflicts between municipal regulations and national principles such as constitutional rights and property rights. Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many other places. Municipal officials may be appointed from a higher level of government or elected locally. Municipal services Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, though some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968. Finance The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue by charging for services or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on the future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings. Governance Governance includes government but refers to a wider domain of social control functions implemented by many actors including nongovernmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide has led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally governs, toward a theory of outside economic control, widely associated in academia with the philosophy of neoliberalism. 
In the neoliberal model of governance, public utilities are privatized, industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners. The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in the emergent megacities, where international organizations consider existing governments inadequate for their large populations. Urban planning Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice the process involves both public and private elements. The legal principle of eminent domain is used by government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Society Social structure Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic and racial lines. People living relatively close together may live, work, and play, in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development which surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the west, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. 
The global urban proletariat of today, however, generally lacks the status as factory workers which in the nineteenth century provided access to the means of production. Economics Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density enables also sharing of common infrastructure and production facilities, however in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, housekeeping and prostitution to grey-collar work in law, finance, and administration. Culture and communications Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves playing some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, world history, and social change. Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful. Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; to attract businesses, investors, residents, and tourists; and to create a shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. 
Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Warfare Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities. Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people concentrated into cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside. During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, functionally extends modern urban crime prevention, which already uses concepts such as defensible space. Although capture is the more common objective, warfare has in some cases spelt complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombing of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "countervalue" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces. Climate change Infrastructure Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital (pipes, wires, plants, vehicles, etc.) but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private. Infrastructure in general (if not every infrastructure project) plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. 
Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continue to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already. Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance. Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives. Utilities Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide. Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, streetlights and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications. Transportation Because cities rely on specialization and an economic system based on wage labour, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. Citydwellers travel foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas. Historically, city streets were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the west, bicycles or (velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In western cities, industrializing, expanding, and electrifying at this time, public transit systems and especially streetcars enabled urban expansion as new residential neighborhoods sprung up along transit lines and workers rode to and from work downtown. 
Since the mid-twentieth century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US, where corporate and governmental policies favored automobile transport systems, and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, creating widespread traffic issues along with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks. The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. Economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems, which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a car-free city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing Housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. Home ownership represents status and a modicum of economic security, compared to renting, which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Ecology Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species which never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna include insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. Cities generate considerable ecological footprints.
Chervil is a delicate annual herb related to parsley. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes. Name The name chervil is from Anglo-Norman, from Latin, and ultimately from an Ancient Greek word meaning "leaves of joy". Biology A member of the Apiaceae, chervil is native to the Caucasus but was spread by the Romans through most of Europe, where it is now naturalised. It is also grown frequently in the United States, where it sometimes escapes cultivation. Such escape can be recognized, however, as garden chervil is distinguished from all other Anthriscus species growing in North America (i.e., A. caucalis and A. sylvestris) by its having lanceolate-linear bracteoles and a fruit with a relatively long beak. The plants have tripinnate leaves that may be curly. The small white flowers form small umbels. The fruit is about 1 cm long, oblong-ovoid with a slender, ridged beak. Uses and impact Culinary arts Chervil is used, particularly in France, to season poultry, seafood, young spring vegetables (such as carrots), soups, and sauces. More delicate than parsley, it has a faint taste of liquorice or aniseed. Chervil is one of the four traditional French fines herbes, along with tarragon, chives, and parsley, which are essential to French cooking. Unlike the more pungent, robust herbs such as thyme and rosemary, which can take prolonged cooking, the fines herbes are added at the last minute, to salads, omelettes, and soups. Horticulture According to some, slugs are attracted to chervil, and the plant is sometimes used to bait them.
Habitat Chives are native to temperate areas of Europe, Asia and North America. Range It is found in Asia within the Caucasus (in Armenia, Azerbaijan and Georgia), also in China, Iran, Iraq, Japan (within the provinces of Hokkaido and Honshu), Kazakhstan, Kyrgyzstan, Mongolia, Pakistan, the Russian Federation (within the provinces of Kamchatka, Khabarovsk, and Primorye), Siberia and Turkey. In middle Europe, it is found within Austria, the Czech Republic, Germany, the Netherlands, Poland and Switzerland. In northern Europe, in Denmark, Finland, Norway, Sweden and the United Kingdom. In southeastern Europe, within Bulgaria, Greece, Italy and Romania. It is also found in southwestern Europe, in France, Portugal and Spain. In North America, it is found in Canada (within the provinces and territories of Alberta, British Columbia, Manitoba, Northwest Territories, Nova Scotia, New Brunswick, Newfoundland, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan and Yukon), and the United States (within the states of Alaska, Colorado, Connecticut, Idaho, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, Washington, West Virginia, Wisconsin and Wyoming).
Uses Culinary arts Chives are grown for their scapes and leaves, which are used for culinary purposes as a flavoring herb, and provide a somewhat milder onion-like flavor than those of other Allium species. Chives have a wide variety of culinary uses, such as in traditional dishes in France, Sweden, and elsewhere. In his 1806 book Attempt at a Flora (Försök til en flora), Retzius describes how chives are used with pancakes, soups, fish, and sandwiches. They are also an ingredient of the gräddfil sauce served with the traditional herring dish at Swedish midsummer celebrations. The flowers may also be used to garnish dishes. In Poland and Germany, chives are served with quark. Chives are one of the fines herbes of French cuisine, the others being tarragon, chervil and parsley. Chives can be found fresh at most markets year-round, making them readily available; they can also be dry-frozen without much impairment to the taste, giving home growers the opportunity to store large quantities harvested from their own gardens. Uses in plant cultivation Retzius also describes how farmers would plant chives between the rocks making up the borders of their flowerbeds, to keep the plants free from pests (such as Japanese beetles). The growing plant repels unwanted insect life, and the juice of the leaves can be used for the same purpose, as well as for fighting fungal infections, mildew, and scab. Cultivation Chives are cultivated both for their culinary uses and for their ornamental value; the violet flowers are often used in ornamental dry bouquets. The flowers are also edible and are used in salads, or to make blossom vinegars. Chives thrive in well-drained soil, rich in organic matter, with a pH of 6–7 and full sun. They can be grown from seed and mature in summer, or early the following spring. Typically, chives need to be germinated at a temperature of 15 to 20 °C (60–70 °F) and kept moist. They can also be planted under a cloche or germinated indoors in cooler climates, then planted out later. After at least four weeks, the young shoots should be ready to be planted out. They are also easily propagated by division. In cold regions, chives die back to the underground bulbs in winter, with the new leaves appearing in early spring. Chives starting to look old can be cut back to about 2–5 cm. When harvesting, the needed number of stalks should be cut to the base. During the growing season, the plant continually regrows leaves, allowing for a continuous harvest. Chives are susceptible to damage by leek moth larvae, which bore into the leaves or bulbs of the plant. History and cultural importance Chives have been cultivated in Europe since the Middle Ages (from the 5th to the 15th centuries), although their usage dates back 5,000 years. They were sometimes referred to as "rush leeks". Chives were mentioned in 80 A.D. by Marcus Valerius Martialis in his "Epigrams". The Romans believed chives could relieve the pain from sunburn or a sore throat. They believed eating chives could increase blood pressure and act as a diuretic. Romani have used chives in fortune telling. Bunches of dried chives hung around a house were believed to ward off disease and evil.
teamed up with his radio producer Armando Iannucci to create On the Hour, a satire of news programmes. This was expanded into a television spin-off, The Day Today, which launched the career of comedian Steve Coogan and has since been hailed as one of the most important satirical shows of the 1990s. Morris further developed the satirical news format with Brass Eye, which lampooned celebrities whilst focusing on themes such as crime and drugs. For many, the apotheosis of Morris' career was a Brass Eye special, which dealt with the moral panic surrounding paedophilia. It quickly became one of the most complained-about programmes in British television history, leading the Daily Mail to describe him as "the most loathed man on TV". Meanwhile, Morris' postmodern sketch comedy and ambient music radio show Blue Jam, which had seen controversy similar to Brass Eye, helped him to gain a cult following. Blue Jam was adapted into the TV series Jam, which some hailed as "the most radical and original television programme broadcast in years", and he went on to win the BAFTA Award for Best Short Film after expanding a Blue Jam sketch into My Wrongs 8245–8249 & 117, which starred Paddy Considine. This was followed by Nathan Barley, a sitcom satirising hipsters, written in collaboration with a then little-known Charlie Brooker; it had low ratings but found success upon its DVD release. Morris followed this by joining the cast of the sitcom The IT Crowd, his first project in which he did not have writing or producing input. In 2010, Morris directed his first feature-length film, Four Lions, which satirised Islamic terrorism through a group of inept British Muslims. Reception of the film was largely positive, earning Morris his second BAFTA Film Award, this time for Outstanding Debut. Since 2012, he has directed four episodes of Iannucci's political comedy Veep and appeared onscreen in The Double and Stewart Lee's Comedy Vehicle. His second feature-length film, The Day Shall Come, was released in 2019. Early life Christopher J Morris was born on 15 June 1962 in Colchester, Essex, the son of Rosemary Parrington and Paul Michael Morris. His father was a GP. Morris has a large red birthmark almost completely covering the left side of his face and neck, which he hides with makeup when acting. He grew up in a Victorian farmhouse in the village of Buckden, Cambridgeshire, which he described as "very dull". He has two younger brothers, including theatre director Tom Morris. From an early age, he was a prankster and had a passion for radio. From the age of 10, he was educated at the independent Jesuit boarding school Stonyhurst College in Stonyhurst, Lancashire. He went on to study zoology at the University of Bristol, where he gained a 2:1. Career Radio On graduating, Morris pursued a career as a musician in various bands, for which he played the bass guitar. He then went to work for Radio West, a local radio station in Bristol. He later took up a news traineeship with BBC Radio Cambridgeshire, where he took advantage of access to editing and recording equipment to create elaborate spoofs and parodies. He also spent time in early 1987 hosting a 2–4pm afternoon show and finally ended up presenting the Saturday morning show I.T. In July 1987, he moved on to BBC Radio Bristol to present his own show No Known Cure, broadcast on Saturday and Sunday mornings. The show was surreal and satirical, with odd interviews conducted with unsuspecting members of the public.
He was fired from Bristol in 1990 after "talking over the news bulletins and making silly noises". In 1988 he also joined, from its launch, Greater London Radio (GLR). He presented The Chris Morris Show on GLR until 1993, when the show was suspended after a sketch was broadcast involving a child "outing" celebrities. In 1991, Morris joined Armando Iannucci's spoof news project On the Hour. Broadcast on BBC Radio 4, it saw him work alongside Iannucci, Steve Coogan, Stewart Lee, Richard Herring and Rebecca Front. In 1992, Morris hosted Danny Baker's Radio 5 Morning Edition show for a week whilst Baker was on holiday. In 1994, Morris began a weekly evening show, the Chris Morris Music Show, on BBC Radio 1 alongside Peter Baynham and 'man with a mobile phone' Paul Garner. In the shows, Morris perfected the spoof interview style that would become a central component of his Brass Eye programme. In the same year, Morris teamed up with Peter Cook (as Sir Arthur Streeb-Greebling) in a series of improvised conversations for BBC Radio 3 entitled Why Bother?. Move into television and film In 1994, a BBC 2 television series based on On the Hour was broadcast under the name The Day Today. The Day Today made a star of Morris, and marked the television debut of Steve Coogan's Alan Partridge character. The programme ended on a high after just one series, with Morris winning the 1994 British Comedy Award for Best Newcomer for his lead role as the Paxmanesque news anchor. In 1996, Morris appeared on the daytime programme The Time, The Place, posing as an academic, Thurston Lowe, in a discussion entitled "Are British Men Lousy Lovers?", but was found out when a producer alerted the show's host, John Stapleton. In 1997, the black humour which had featured in On the Hour and The Day Today became more prominent in Brass Eye, another spoof current affairs television documentary, shown on Channel 4. All three series satirised and exaggerated issues expected of news shows. The second episode of Brass Eye, for example, satirised drugs and the political rhetoric surrounding them. To help convey the satire, Morris invented a fictional drug by the name of "cake". In the episode, British celebrities and politicians describe the supposed symptoms in detail; David Amess mentioned the fictional drug in Parliament. In 2001, Morris satirised the moral panic regarding paedophilia in the most controversial episode of Brass Eye, "Paedogeddon". Channel 4 apologised for the episode after receiving criticism from tabloids and around 3,000 complaints from viewers, which, at the time, was the most for an episode of British television. From 1997 to 1999, Morris created Blue Jam for BBC Radio 1, a surreal taboo-breaking radio show set to an ambient soundtrack. In 2000, this was followed by Jam, a television reworking. Morris released a 'remix' version of this, entitled Jaaaaam. In 2002, Morris ventured into film, directing the short My Wrongs #8245–8249 & 117, adapted from a Blue Jam monologue about a man led astray by a sinister talking dog. It was the first film project of Warp Films, a branch of Warp Records. In 2002 it won the BAFTA for best short film. In 2005 Morris worked on a sitcom entitled Nathan Barley, based on the character created by Charlie Brooker for his website TVGoHome (Morris had contributed to TVGoHome on occasion, under the pseudonym 'Sid Peach'). Co-written by Brooker and Morris, the series was broadcast on Channel 4 in early 2005.
The IT Crowd and Comedy Vehicle Morris was a cast member in The IT Crowd, a Channel 4 sitcom which focused on the information technology department of the fictional company Reynholm Industries. The series was written and directed by Graham Linehan (with whom Morris collaborated on The Day Today, Brass Eye and Jam) and produced by Ash Atalla. Morris played Denholm Reynholm, the eccentric managing director of the company. This marked the first time Morris had acted in a substantial role in a project which he had not developed himself. Morris' character appeared to leave the series during episode two of the second series. His character made a brief return in the first episode of the third series. In November 2007, Morris wrote an article for The Observer in response to Ronan Bennett's article published six days earlier in The Guardian. Bennett's article, "Shame on us", accused the novelist Martin Amis of racism. Morris' response, "The absurd world of Martin Amis", was also highly critical of Amis; although he did not accede to Bennett's accusation of racism, Morris likened Amis to the Muslim cleric Abu Hamza (who was jailed for inciting racial hatred in 2006), suggesting that both men employ "mock erudition, vitriol and decontextualised quotes from the Qur'an" to incite hatred. Morris served as script editor for the 2009 series Stewart Lee's Comedy Vehicle, working with former colleagues Stewart Lee, Kevin Eldon and Armando Iannucci.
He maintained this role for the second (2011) and third series (2014), also appearing as a mock interviewer dubbed the "hostile interrogator" in the third and fourth series. Four Lions, Veep, and other appearances Morris completed his debut feature film Four Lions in late 2009, a satire based on a group of Islamist terrorists in Sheffield. It premiered at the Sundance Film Festival in January 2010 and was shortlisted for the festival's World Cinema Narrative prize. The film (working title Boilerhouse) was picked up by Film Four. Morris told The Sunday Times that the film sought to do for Islamic terrorism what Dad's Army, the classic BBC comedy, did for the Nazis by showing them as "scary but also ridiculous". In 2012, Morris directed the seventh and penultimate episode of the first season of Veep, an Armando Iannucci-devised American version of The Thick of It. In 2013, he returned to direct two episodes for the second season of Veep, and a further episode for season three in 2014. In 2013, Morris appeared briefly in Richard Ayoade's The Double, a black comedy film based on the Fyodor Dostoyevsky novella of the same name. Morris had previously worked with Ayoade on Nathan Barley and The IT Crowd. In February 2014, Morris made a surprise appearance at the beginning of a Stewart Lee live show, introducing the comedian with fictional anecdotes about their work together. The following month, Morris appeared in the third series of Stewart Lee's Comedy Vehicle as a "hostile interrogator", a role previously occupied by Armando Iannucci. In December 2014, it was announced that a short radio collaboration with Noel Fielding and Richard Ayoade would be broadcast on BBC Radio 6. According to Fielding, the work had been in progress since around 2006. However, in January 2015 it was decided, 'in consultation with [Morris]', that the project was not yet complete, and so the intended broadcast did not go ahead. The Day Shall Come A statement released by Film4 in February 2016 made reference to funding what would be Morris' second feature film. In November 2017 it was reported that Morris had shot the movie, starring Anna Kendrick, in the Dominican Republic but the title was not made public. It was later reported in January 2018 that Jim Gaffigan and Rupert Friend had joined the cast of the still-untitled film, and that the plot would revolve around an FBI hostage situation gone wrong. The completed film, titled The Day Shall Come, had its world premiere at South by Southwest on 11 March 2019.
Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast side of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns. There were a total of 70,331 births in Colorado in 2006, a birth rate of 14.6 per thousand. In 2007, non-Hispanic whites were involved in 59.1% of all the births. Some 14.06% of those births involved a non-Hispanic white person and someone of a different race, most often a couple including one Hispanic person. Births where at least one Hispanic person was involved accounted for 43% of the births in Colorado. As of the 2010 census, Colorado has the seventh-highest percentage of Hispanics (20.7%) in the U.S., behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population is estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the 5th-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. In percentages, Colorado has the 6th-highest percentage of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada. Birth data In 2011, 46% of Colorado's population younger than the age of one were minorities, meaning that they had at least one parent who was not non-Hispanic white. Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin have not been collected separately but are included in one Hispanic group; persons of Hispanic origin may be of any race. In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman. Significant contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family. Language English, the official language of the state, is the most commonly spoken language in Colorado, followed by Spanish. One Native American language still spoken in Colorado is the Colorado River Numic language, also known as the Ute dialect. Religion Major religious affiliations of the people of Colorado are 64% Christian, of whom 44% are Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox. Other religious breakdowns are 1% Jewish, 1% Muslim, 1% Buddhist and 4% other. The religiously unaffiliated make up 29% of the population. The largest denominations by number of adherents in 2010 were the Catholic Church with 811,630; multi-denominational Evangelical Protestants with 229,981; and The Church of Jesus Christ of Latter-day Saints with 151,433. Our Lady of Guadalupe Catholic Church was the first permanent Catholic parish in modern-day Colorado and was constructed by Spanish colonists from New Mexico in modern-day Conejos. Latin Church Catholics are served by three dioceses: the Archdiocese of Denver and the Dioceses of Colorado Springs and Pueblo. The first permanent settlers from The Church of Jesus Christ of Latter-day Saints in Colorado arrived from Mississippi and initially camped along the Arkansas River just east of the present-day site of Pueblo. Health Colorado is generally considered among the healthiest states by behavioral and healthcare researchers.
Among the positive contributing factors are the state's well-known outdoor recreation opportunities and initiatives. However, there is a stratification of health metrics, with wealthier counties such as Douglas and Pitkin performing significantly better relative to southern, less wealthy counties such as Huerfano and Las Animas. Obesity According to several studies, Coloradans have the lowest rates of obesity of any state in the US. Some 24% of the population was considered medically obese, and while the lowest in the nation, the percentage had increased from 17% in 2004. Life expectancy According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state. Economy Total employment (2019): 2,473,192; number of employer establishments: 174,258. CNBC's list of "Top States for Business for 2010" recognized Colorado as the third-best state in the nation for business, behind only Texas and Virginia. The total state product in 2015 was $318.6 billion. Median annual household income in 2016 was $70,666, 8th in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation. The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay. The federal government is also a major economic force in the state, with many important federal facilities, including NORAD (North American Aerospace Defense Command), the United States Air Force Academy, Schriever Air Force Base, located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; the U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Space Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax Prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of land in Colorado amounting to 37% of the total area of the state. In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified, and is notable for its concentration of scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, and the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer of any state. Denver is an important financial center. The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018. A number of nationally known brand names have originated in Colorado factories and laboratories.
From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift packed meat of Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981. Colorado has a flat 4.63% income tax, regardless of income level. On November 3, 2020, voters authorized an initiative to lower that income tax rate to 4.55 percent. Unlike most states, which calculate taxes based on federal adjusted gross income, Colorado bases its taxes on taxable income, that is, income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return. Many counties and cities charge their own rates, in addition to the base state rate. There are also certain county and special district taxes that may apply. Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003. The tax break was scheduled to return for assessment year 2006, payable in 2007. The state's unemployment rate was 4.2%. The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action. Natural resources Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the nation's hundred largest natural gas fields, and two of its hundred largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated volume of oil nearly as large as the entire world's proven oil reserves; the economic viability of the oil shale, however, has not been demonstrated. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state. Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. When Colorado and Utah dominated radium mining from 1910 to 1922, uranium and vanadium were the byproducts (giving towns like present-day Superfund site Uravan their names). Uranium price increases from 2001 to 2007 prompted a number of companies to revive uranium mining in Colorado.
During the 1940s, certain communities, including Naturita and Paradox, earned the moniker of "yellowcake towns" from their relationship with uranium mining. Price drops and financing problems in late 2008 forced these companies to cancel or scale back uranium-mining projects. As of 2016, there were no major uranium mining operations in the state, though plans existed to restart production. Corn grown in the flat eastern part of the state offers potential resources for ethanol production. Electricity generation Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides potential for geothermal power development. Much of the state is sunny, and could produce solar power. Major rivers flowing from the Rocky Mountains offer hydroelectric power resources. Culture Arts and film Related lists include the List of museums in Colorado, the List of theaters in Colorado, and Music of Colorado. A number of film productions have shot on location in Colorado, especially prominent Westerns like True Grit, The Searchers, and Butch Cassidy and the Sundance Kid. A number of historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also a number of scenic highways and mountain passes that helped to feature the open road in films such as Vanishing Point, Bingo and Starman. Some Colorado landmarks have been featured in films, such as The Stanley Hotel in Dumb and Dumber and The Shining and the Sculptured House in Sleeper. In 2015, Furious 7 was to film driving sequences on Pikes Peak Highway in Colorado. The TV series Good Luck Charlie was set, but not filmed, in Denver, Colorado. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado. There are also a number of established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival, Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival. Many notable writers have lived or spent extended periods of time in Colorado. Beat Generation writers Jack Kerouac and Neal Cassady lived in and around Denver for several years each. Irish playwright Oscar Wilde visited Colorado on his tour of the United States in 1882, writing in his 1906 Impressions of America that Leadville was "the richest city in the world. It has also got the reputation of being the roughest, and every man carries a revolver." Cuisine Colorado is known for its Southwest and Rocky Mountain cuisine. Mexican restaurants are prominent throughout the state. Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to a number of national food and beverage companies, top-tier restaurants and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. Denver is known for steak, but now has a diverse culinary scene with many restaurants. Polidori Sausage, a brand of pork products available in supermarkets, originated in Colorado in the early 20th century. The Food & Wine Classic is held each June in Aspen. Aspen also has a reputation as the culinary capital of the Rocky Mountain region.
Wine and beer Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional Vitis vinifera grapes along with wines made from cherries, peaches, plums and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape-growing regions contain the highest-elevation vineyards in the United States, with most viticulture in the state practiced at high elevations. The mountain climate ensures warm summer days and cool nights. Colorado is home to two designated American Viticultural Areas, the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located. However, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County, Colorado, as one of the Top Ten wine travel destinations in the world. Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the "Napa Valley of Beer" due to its high density of craft breweries. Marijuana and hemp Colorado is open to cannabis (marijuana) tourism. With the adoption of Amendment 64 in 2012, Colorado became the first state in the union to legalize marijuana for medicinal (2000), industrial (referring to hemp, 2012), and recreational (2012) use. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three quarters of 2017. The state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC. On April 4, 2014, Senate Bill 14–184, addressing oversight of Colorado's industrial hemp program, was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014. Medicinal use On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amended the Colorado state constitution to allow the medical use of marijuana. A patient's medical use of marijuana, within the following limits, is lawful: (I) no more than a limited amount of a usable form of marijuana; and (II) no more than twelve marijuana plants, with six or fewer being mature, flowering plants that are producing a usable form of marijuana. Currently, Colorado has listed "eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy". While governor, John Hickenlooper allocated about half of the state's $13 million "Medical Marijuana Program Cash Fund" to medical research in the 2014 budget. By 2018, the Medical Marijuana Program Cash Fund was the "largest pool of pot money in the state" and was used to fund programs including research into pediatric applications for controlling autism symptoms. Recreational use On November 6, 2012, voters amended the state constitution to protect "personal use" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. Sports Colorado has teams in five major professional sports leagues, all based in the Denver metropolitan area.
Colorado is the least populous state with a franchise in each of the major professional sports leagues. The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball. The Pikes Peak International Hill Climb is a major hillclimbing motor race held on the Pikes Peak Highway. The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship and BMW Championship. College athletics Several universities and colleges in the state participate in the National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who used to play in the Big 12 but now play in the Pac-12. They have won the 1957 and 1991 Orange Bowls, the 1995 Fiesta Bowl, and the 1996 Cotton Bowl Classic. Transportation Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor. It connects Grand Junction and the mountain communities with Denver, and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state. Many smaller communities are connected to this network only via county roads. Denver International Airport (DIA) is the fifth-busiest domestic U.S. airport and the twentieth-busiest airport in the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado, and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States. Extensive public transportation bus services are offered both intra-city and inter-city, including the Denver metro area's RTD services. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver metropolitan area. The RTD rail system has had 170 light-rail vehicles in service on its rail network. Amtrak operates two passenger rail lines in Colorado, the California Zephyr and Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad, which began in 1870 and wrote the book on mountain railroading. In 1988 the "Rio Grande" acquired the Southern Pacific Railroad, but the combined railroad took the Southern Pacific name under their joint owner Philip Anschutz. On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe, which formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado. Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S.
economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states. In August 2014, Colorado began to issue driver licenses to aliens not lawfully in the United States who lived in Colorado. In September 2014, KCNC reported that 524 non-citizens were issued Colorado driver licenses that are normally issued to U.S. citizens living in Colorado. Government State government Like the federal government and all other U.S. states, Colorado has a state constitution that provides for three branches of government: the legislative, the executive, and the judicial branches. The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat. Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), the Secretary of State of Colorado, the Colorado State Treasurer, and the Attorney General of Colorado, all of whom serve four-year terms. The seven-member Colorado Supreme Court is the state's highest court. The Colorado Court of Appeals, with 22 judges, sits in divisions of three judges each. Colorado is divided into 22 judicial districts, each of which has a district court and a county court with limited jurisdiction. The state also has specialized water courts, which sit in seven distinct divisions around the state and which decide matters relating to water rights and the use and administration of water. The state legislative body is the Colorado General Assembly, which is made up of two houses, the House of Representatives and the Senate. The House has 65 members and the Senate has 35. The Democratic Party holds a 20-to-15 majority in the Senate and a 41-to-24 majority in the House. Most Coloradans are native to other states (nearly 60% according to the 2000 census), and this is illustrated by the fact that the state did not have a native-born governor from 1975 (when John David Vanderhoof left office) until 2007, when Bill Ritter took office; his election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958 (Vanderhoof had ascended from the Lieutenant Governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973). Taxes are collected by the Colorado Department of Revenue. Politics Colorado was once considered a swing state, but it has more recently become a relatively safe blue state in both state and federal elections. In presidential elections, it had not been won by double digits since 1984 until 2020, and it has backed the winning candidate in 9 of the last 11 elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years. In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992. However, it became a competitive swing state in the 1990s. Since the mid-2000s, it has swung heavily to the Democrats, voting for Barack Obama in 2008 and 2012, Hillary Clinton in 2016 and Joe Biden in 2020. Colorado politics features a contrast between conservative cities such as Colorado Springs and Grand Junction and liberal cities such as Boulder and Denver.
Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and a number of western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far Western Colorado near Grand Junction. Colorado is represented by two United States Senators:
United States Senate Class 2, John Hickenlooper (Democratic) 2021–
United States Senate Class 3, Michael Bennet (Democratic) 2009–
Colorado is represented by seven Representatives to the United States House of Representatives:
Colorado's 1st congressional district, Diana DeGette (Democratic) 1997–
Colorado's 2nd congressional district, Joe Neguse (Democratic) 2019–
Colorado's 3rd congressional district, Lauren Boebert (Republican) 2021–
Colorado's 4th congressional district, Ken Buck (Republican) 2015–
Colorado's 5th congressional district, Doug Lamborn (Republican) 2007–
Colorado's 6th congressional district, Jason Crow (Democratic) 2019–
Colorado's 7th congressional district, Ed Perlmutter (Democratic) 2007–
In a 2020 study, Colorado was ranked as the 7th easiest state for citizens to vote in. Significant initiatives and legislation enacted in Colorado In 1881 Colorado voters approved a referendum that selected Denver as the state capital. Colorado was the first state in the union to enact, by voter referendum, a law extending suffrage to women. That initiative was approved by the state's voters on November 7, 1893. On the November 8, 1932, ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified. Colorado has banned, via C.R.S. section 12-6-302, the sale of motor vehicles on Sunday since at least 1953. In 1972 Colorado voters rejected a referendum proposal to fund the 1976 Winter Olympics, which had been scheduled to be held in the state. Denver had been chosen by the International Olympic Committee as host city on May 12, 1970. In 1992, by a margin of 53 to 47 percent, Colorado voters approved an amendment to the state constitution (Amendment 2) that would have prevented any city, town, or county in the state from taking any legislative, executive, or judicial action to recognize homosexuals or bisexuals as a protected class. In 1996, in a 6–3 ruling in Romer v. Evans, the U.S. Supreme Court found that preventing protected status based upon homosexuality or bisexuality did not satisfy the Equal Protection Clause. In 2006 voters passed Amendment 43, which purported to ban gay marriage in Colorado. That initiative was nullified by the U.S. Supreme Court's 2015 decision in Obergefell v. Hodges. In 2012, voters amended the state constitution protecting "personal use" of marijuana for adults, establishing a framework to regulate cannabis in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. On May 29, 2019, Governor Jared Polis signed House Bill 1124, immediately prohibiting law enforcement officials in Colorado from holding undocumented immigrants solely on the basis of a request from U.S. Immigration and Customs Enforcement. On June 14, 2006, the United States Mint released the 38th of 50 state quarters authorized by Public Law 105-124, the Colorado State Quarter. Later, in 2014, the United States Mint released the 24th quarter in the America the Beautiful Quarters Program, the Colorado Great Sand Dunes National Park Quarter.
Education The first institution of higher education in the Colorado Territory was the Colorado Seminary, opened on November 16, 1864, by the Methodist Episcopal Church. The seminary closed in 1867, but reopened in 1880 as the University of Denver. In 1870, Bishop George Maxwell Randall of the Episcopal Missionary District of Colorado and Parts Adjacent opened the first of what would become the Colorado University Schools, which would include the Territorial School of Mines, opened in 1873 and sold to the Colorado Territory in 1874. These schools were initially run by the Episcopal Church. An 1861 territorial act called for the creation of a public university in Boulder, though it would not be until 1876 that the University of Colorado was founded. The 1876 act also renamed the Territorial School of Mines as the Colorado School of Mines. An 1870 territorial act created the Agricultural College of Colorado, which opened in 1879. The college was renamed the Colorado State College of Agriculture and Mechanic Arts in 1935, and became Colorado State University in 1957. The first Catholic college in Colorado was the Jesuit Sacred Heart College, which was founded in New Mexico in 1877, moved to Morrison in 1884, and to Denver in 1887. The college was renamed Regis College in 1921 and Regis University in 1991. On April 1, 1924, armed students patrolled the campus after a burning cross was found, the climax of tensions between Regis College and the locally powerful Ku Klux Klan. Following a 1950 assessment by the Service Academy Board, it was determined that there was a need to supplement the U.S. Military and Naval Academies with a third school that would provide commissioned officers for the newly independent Air Force. On April 1, 1954, President Dwight Eisenhower signed a law providing for the creation of a U.S. Air Force Academy. Later that year, Colorado Springs was selected to host the new institution. From its establishment in 1955 until appropriate facilities in Colorado Springs were completed and opened in 1958, the Air Force Academy operated out of Lowry Air Force Base in Denver. With the opening of the Colorado Springs facility, the cadets moved to the new campus, though not in the full-kit march that some urban and campus legends suggest. The first class of Space Force officers from the Air Force Academy was commissioned on April 18, 2020.
Adams State University
Aims Community College
Arapahoe Community College
Belleview Christian College & Bible Seminary
Colorado Christian University
Colorado College
Colorado Mesa University
Colorado Mountain College
Colorado Northwestern Community College
Colorado School of Mines
Colorado State University System
Colorado State University
Colorado State University Pueblo
CSU–Global Campus
Colorado Technical University
Community College of Aurora
Community College of Denver
Denver Seminary
DeVry University
Emily Griffith Opportunity School
Fort Lewis College
Front Range Community College
Iliff School of Theology
Johnson & Wales University
Lamar Community College
Metropolitan State University of Denver
Morgan Community College
Naropa University
Nazarene Bible College
Northeastern Junior College
Otero College
Pikes Peak Community College
Pueblo Community College
Red Rocks Community College
Regis University
Rocky Mountain College of Art and Design
Rocky Vista University College of Osteopathic Medicine
Trinidad State College
United States Air Force Academy
University of Colorado System
University of Colorado Boulder
University of Colorado Colorado Springs
University of Colorado Denver
Anschutz Medical Campus
Auraria Campus
University of Denver
University of Northern Colorado
Western Colorado University
Military installations The major military installations in Colorado include:
Buckley Space Force Base
Air Reserve Personnel Center
Fort Carson (U.S. Army)
Piñon Canyon Maneuver Site
Peterson Space Force Base
Cheyenne Mountain Space Force Station
Pueblo Chemical Depot (U.S. Army)
Schriever Space Force Base
United States Air Force Academy
Former military posts in Colorado include:
Spanish Fort (1819–1821)
Fort Massachusetts (1852–1858)
Fort Garland (1858–1883)
Camp Collins (1862–1870)
Fort Logan (1887–1946)
Fitzsimons Army Hospital (1918–1999)
Denver Medical Depot (1925–1949)
Lowry Air Force Base (1938–1994)
Pueblo Army Air Base (1941–1948)
Rocky Mountain Arsenal (1942–1992)
Camp Hale (1942–1945)
La Junta Army Air Field (1942–1946)
Leadville Army Air Field (1943–1944)
Colorado National Guard Armory (1913–1933)
Native American reservations The two Native American reservations remaining in Colorado are:
Southern Ute Indian Reservation — Southern Ute Indian Tribe (1873; Ute dialect: Kapuuta-wa Moghwachi Núuchi-u)
Ute Mountain Ute Indian Reservation — Ute Mountain Ute Tribe (1940; Ute dialect: Wʉgama Núuchi)
The two abolished Indian reservations in Colorado were:
Cheyenne and Arapaho Indian Reservation (1851–1870)
Ute Indian Reservation (1855–1873)
Protected areas Colorado is home to 4 national parks, 8 national monuments, 2 national historic sites, 2 national recreation areas, 4 national historic trails, 1 national scenic trail, 11 national forests, 2 national grasslands, 44 national wildernesses, 3 national conservation areas, 8 national wildlife refuges, 3 national heritage areas, 26 national historic landmarks, 16 national natural landmarks, more than 1,500 National Register of Historic Places listings, 1 wild and scenic river, 42 state parks, 307 state wildlife areas, 93 state natural areas, 28 national recreation trails, 6 regional trails, and numerous other scenic, historic, and recreational areas.
The following are the 21 units of the National Park System in Colorado:

Arapaho National Recreation Area
Bent's Old Fort National Historic Site
Black Canyon of the Gunnison National Park
Browns Canyon National Monument
California National Historic Trail
Canyons of the Ancients National Monument
Chimney Rock National Monument
Colorado National Monument
Continental Divide National Scenic Trail
Curecanti National |

of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories: the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement. In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian Territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free-soil and pro-slavery factions.

The gold seekers organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States, which was embroiled in the debate over slavery. The election of Abraham Lincoln as President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory.

Territory act

Thirty days later, on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. The name Colorado was chosen because it was commonly believed that the Colorado River originated in the territory. In 1776, Spanish priest Silvestre Vélez de Escalante recorded that Native Americans in the area knew the river as el Rio Colorado for the red-brown silt that the river carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River.

On April 12, 1861, South Carolina artillery opened fire on Fort Sumter, starting the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause. In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10.
The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to capture ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans. On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States.

In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his men to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre.

During and after the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First Transcontinental Railroad. The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year.

Statehood

The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the Centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state, earning it the moniker "Centennial State". The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first to do so by popular vote (of Colorado men).
The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture trade developed into a major industry in Colorado, a period that became known locally as the Carnation Gold Rush.

Twentieth and twenty-first centuries

Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners strike and the Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. Both the 1913–1914 Coalfield War and the Denver streetcar strike of 1920 resulted in federal troops intervening to end the violence. In 1927, the Columbine Mine massacre resulted in the deaths of six strikers following a confrontation with the Colorado Rangers. More than 5,000 Colorado miners, many of them immigrants, are estimated to have died in accidents since records began to be formally collected following an 1884 accident in Crested Butte that killed 59.

In 1924, the Ku Klux Klan Colorado Realm achieved dominance in Colorado politics. At its peak membership, the Second Klan exerted significant control over both the local and state Democratic and Republican parties, particularly in the governor's office and the city governments of Denver, Cañon City, and Durango. A particularly strong element of the Klan controlled the Denver Police. Cross burnings became semi-regular occurrences in cities such as Florence and Pueblo. The Klan targeted African-Americans, Catholics, Eastern European immigrants, and other groups that were not white Protestants. Efforts by non-Klan lawmen and lawyers, including Philip Van Cise, led to a rapid decline in the organization's power, with membership waning significantly by the end of the 1920s.

Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the 1930 U.S. Census, the population of Colorado had exceeded one million residents for the first time. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortunes. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009.

On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, which resulted in significant plutonium contamination of surrounding populated areas.

From the 1940s to the 1970s, many protest movements gained momentum in Colorado, predominantly in Denver. These included the Chicano Movement, a civil rights and social movement of Mexican Americans emphasizing a Chicano identity, which is widely considered to have begun in Denver. The First National Chicano Liberation Youth Conference was held in Colorado in March 1969. In 1967, Colorado became the first state to loosen restrictions on abortion when Governor John Love signed a law allowing abortions in cases of rape, incest, or threats to the woman's mental or physical health. Many states followed Colorado's lead in loosening abortion laws in the 1960s and 1970s.
Since the late 1990s, Colorado has been the site of multiple major mass shootings, including the Columbine High School massacre in 1999, which made international news; Eric Harris and Dylan Klebold killed 12 students and one teacher before committing suicide, and the attack has since spawned many copycat incidents. On July 20, 2012, a gunman killed 12 people in a movie theater in Aurora. The state responded with tighter restrictions on firearms, including a limit on magazine capacity. On March 22, 2021, a gunman killed 10 people, including a police officer, in a King Soopers supermarket in Boulder.

Four warships of the U.S. Navy have been named the USS Colorado. The first USS Colorado was named for the Colorado River and served in the Civil War and later in the Asiatic Squadron, where it was attacked during the 1871 Korean Expedition. The later three ships were named in honor of the state, including an armored cruiser and the battleship USS Colorado, the lead ship of her class, which served in the Pacific during World War II beginning in 1941. At the time of the attack on Pearl Harbor, the battleship USS Colorado was located at the naval base in San Diego, California, and thus went unscathed. The most recent vessel to bear the name USS Colorado is the Virginia-class submarine USS Colorado (SSN-788), which was commissioned in 2018.

Geography

Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons. In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian; see the conversion sketch after the Plains subsection below). After years of government surveys, the borders of Colorado were officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states whose borders are defined solely by straight boundary lines, with no natural features. The southwest corner of Colorado is the Four Corners Monument at 36°59′56″N, 109°2′43″W, where Colorado, New Mexico, Arizona, and Utah meet; it is the only place in the United States where four states meet.

Plains

Approximately half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly . The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages annually. Eastern Colorado is mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops, and most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams; subterranean water is generally accessed through artesian wells, and heavy use of these wells for irrigation has caused underground water reserves in the region to decline. Eastern Colorado also hosts a considerable amount and range of livestock operations, such as cattle ranches and hog farms.
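The statutory borders quoted above are given both as Greenwich-based longitudes and as offsets from the Washington Meridian. The following is a minimal, non-authoritative Python sketch of that conversion, assuming the commonly cited value of 77°02′48″ W for the Washington Meridian; the helper names and the bounding-box check are illustrative additions, not part of the source.

```python
# Sketch: convert the Washington Meridian offsets quoted for Colorado's borders
# into Greenwich-based longitudes, then do a crude bounding-box check.
# Assumes the Washington Meridian lies at 77°02'48" W of Greenwich (an assumption,
# not stated in the source); ignores the survey-line deviations mentioned in the text.

def dms_to_degrees(d, m=0, s=0):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return d + m / 60 + s / 3600

WASHINGTON_MERIDIAN_W = dms_to_degrees(77, 2, 48)   # ~77.0467 degrees west of Greenwich

# Colorado's east and west borders, quoted as 25 and 32 degrees west of Washington:
east_border_w = WASHINGTON_MERIDIAN_W + 25          # ~102.0467 W, i.e. 102°02'48" W
west_border_w = WASHINGTON_MERIDIAN_W + 32          # ~109.0467 W, i.e. 109°02'48" W

def in_colorado_box(lat_n, lon_w):
    """Rough check against the statutory rectangle (37 N to 41 N latitude)."""
    return 37.0 <= lat_n <= 41.0 and east_border_w <= lon_w <= west_border_w

print(round(east_border_w, 4), round(west_border_w, 4))  # 102.0467 109.0467
print(in_colorado_box(39.7392, 104.9903))                # True (Denver, illustrative point)
```

The conversion reproduces the 102°02′48″ W and 109°02′48″ W figures given above; the point-in-box check is only an approximation of the legal boundary.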
Front Range

Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms blowing in from the Pacific Ocean by the high Rockies in the middle of Colorado. The "Front Range" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in Western Colorado (which is not considered part of the "Front Range") are the cities of Grand Junction, Durango, and Montrose.

Mountains

To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Evans, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico. The Rocky Mountains within Colorado contain 53 true peaks with a total of 58 that are or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about in southern Colorado to about in northern Colorado. Above this tree line only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round; much of the alpine snow melts by mid-August, with the exception of a few snow-capped peaks and a few small glaciers. The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the Front Range, contains most of the historic gold- and silver-mining districts of Colorado.

Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America all lie within the state. The summit of Mount Elbert at elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters in elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest point in Colorado at elevation. This point, the highest low point of any state, is higher than the highest points of 18 states and the District of Columbia.

Continental Divide

The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California.

Within the interior of the Rocky Mountains are several large parks, which are high, broad basins. In the north, on the east side of the Continental Divide, is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River.

South Central region

In south central Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located.
The valley sits between the Sangre de Cristo Mountains and the San Juan Mountains and consists of large desert lands that eventually run into the mountains. The Rio Grande drains due south into New Mexico, Mexico, and Texas. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches.

Colorado Western Slope

The Western Slope area of Colorado includes the western face of the Rocky Mountains and all of the state to the western border. This area encompasses a range of terrains and climates, from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west of the mountains. It is less populous than the Front Range but includes a large number of national parks and monuments.

From west to east, the land of Colorado consists of desert lands, desert plateaus, alpine mountains, national forests, relatively flat grasslands, scattered forests, buttes, and canyons on the western edge of the Great Plains. The famous Pikes Peak is located just west of Colorado Springs; its isolated summit is visible from near the Kansas border on clear days, and also from far to the north and the south.

The northwestern corner of Colorado is a sparsely populated region that contains part of the noted Dinosaur National Monument, which is not only a paleontological area but also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses into Colorado. Desert lands in Colorado are found in and around areas such as Pueblo, Cañon City, Florence, Great Sand Dunes National Park and Preserve, the San Luis Valley, Cortez, Canyons of the Ancients National Monument, Hovenweep National Monument, Ute Mountain, Delta, Grand Junction, Colorado National Monument, and other areas surrounding the Uncompahgre Plateau and the Uncompahgre National Forest.

The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison, Green, and San Juan rivers) or by evaporation in its arid areas. The Colorado River flows through Glenwood Canyon, then through an arid valley of desert from Rifle to Parachute, through De Beque Canyon, and into the arid desert of the Grand Valley, where the city of Grand Junction is located. Also prominent in or near the southern portion of the Western Slope are the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and, to the west of the San Juan Mountains, the Colorado Plateau, a high arid region that borders southern Utah.

Grand Junction is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located along Interstate 70, the only major highway in Western Colorado, and along the Western Slope's major railroad, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction via a route on which there are no continuous highways.
The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride. Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs. The Four Corners Monument in the southwest corner of Colorado marks the common boundary of Colorado, New Mexico, Arizona, and Utah, the only such place in the United States.

Climate

The climate of Colorado is more complex than that of states outside the Mountain States region. Unlike in most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands, and the mountains and surrounding valleys greatly affect the local climate. Northeast, east, and southeast Colorado are mostly high plains, while northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas.

Eastern Plains

The climate of the Eastern Plains is semi-arid (Köppen climate classification: BSk), with low humidity and moderate precipitation, usually from annually, although many areas near the rivers have a semi-humid climate. The area is known for its abundant sunshine and cool, clear nights, which give it a large average diurnal temperature range. The difference between daytime highs and nighttime lows can be considerable, as warmth dissipates to space during clear nights when heat radiation is not trapped by clouds. The Front Range Urban Corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains.

In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring; otherwise, winters tend to be mostly dry and cold. In much of the region, March is the snowiest month, while April and May are normally the rainiest months and April is the wettest month overall. The Front