with the ICTY. The main issue, the flight of General Gotovina, however, remained unresolved, and despite the agreement on an accession negotiation framework, the negotiations did not begin in March 2005. On 4 October 2005 Croatia finally received the green light for accession negotiations after the Chief Prosecutor of the ICTY, Carla Del Ponte, officially stated that Croatia was fully cooperating with the Tribunal. This had been the main condition demanded by EU foreign ministers for accession negotiations. The ICTY called upon other southern European states to follow Croatia's good example. Thanks to the consistent position of Austria during the meeting of EU foreign ministers, a long period of instability and of questioning the determination of the Croatian government to extradite alleged war criminals ended successfully. Croatian Prime Minister Ivo Sanader declared that full cooperation with the Hague Tribunal would continue. The accession process was also complicated by the insistence of Slovenia, an EU member state, that the two countries' border issues be dealt with prior to Croatia's accession to the EU. Croatia finished accession negotiations on 30 June 2011, and on 9 December 2011, signed the Treaty of Accession. A referendum on EU accession was held in Croatia on 22 January 2012, with 66% of participants voting in favour of joining the Union. The ratification process was concluded on 21 June 2013, and entry into force and accession of Croatia to the EU took place on 1 July 2013.

Current events

The main objective of Croatian foreign policy is positioning within the EU institutions and in the region, cooperation with NATO partners and strengthening multilateral and bilateral cooperation. Government officials in charge of foreign policy include the Minister of Foreign and European Affairs, currently Gordan Grlić-Radman, and the President of the Republic, currently Zoran Milanović. Croatia has established diplomatic relations with 186 countries (see List of diplomatic relations of Croatia). As of 2009, Croatia maintained a network of 51 embassies, 24 consulates and eight permanent diplomatic missions abroad. Furthermore, there are 52 foreign embassies and 69 consulates in the Republic of Croatia in addition to offices of international organizations such as the European Bank for Reconstruction and Development, International Organization for Migration, Organization for Security and Co-operation in Europe (OSCE), World Bank, World Health Organization, International Criminal Tribunal for the former Yugoslavia (ICTY), United Nations Development Programme, United Nations High Commissioner for Refugees and UNICEF.

International organizations

The Republic of Croatia participates in the following international organizations: CE, CEI, EAPC, EBRD, ECE, EU, FAO, G11, IADB, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IFAD, IFC, IFRCS, IHO, ILO, IMF, IMO, Inmarsat, Intelsat, Interpol, IOC, IOM, ISO, ITU, ITUC, NAM (observer), NATO, OAS (observer), OPCW, OSCE, PCA, PFP, SECI, UN, UNAMSIL, UNCTAD, UNESCO, UNIDO, UNMEE, UNMOGIP, UPU, WCO, WEU (associate), WHO, WIPO, WMO, WToO, WTO. Croatia also maintains a Permanent Representative to the United Nations.

Foreign support

Croatia receives support from the donor programs of:
European Bank for Reconstruction and Development (EBRD)
European Union
International Bank for Reconstruction and Development
International Monetary Fund
USAID

Between 1991 and 2003, the EBRD directly invested a total of 1,212,039,000 EUR into projects in Croatia. In 1998, U.S.
support to Croatia came through the Southeastern European Economic Development Program (SEED), whose funding in Croatia totaled $23.25 million. More than half of that money was used to fund programs encouraging sustainable returns of refugees and displaced persons. About one-third of the assistance was used for democratization efforts, and another 5% funded financial sector restructuring. In 2003 USAID considered Croatia to be on a "glide path for graduation" along with Bulgaria. Its 2002/2003/2004 funding included around $10 million for economic development, up to $5 million for the development of democratic institutions, about $5 million for the return of population affected by war and between 2 and 3 million dollars for the "mitigation of adverse social conditions and trends". A rising amount of funding, slightly under one million dollars, went to cross-cutting anti-corruption programs. The European Commission has proposed to assist Croatia's efforts to join the European Union with 245 million euros from the PHARE, ISPA and SAPARD aid programs over the course of 2005 and 2006.

International disputes

Relations with neighbouring states have normalized somewhat since the breakup of Yugoslavia. Work has begun, bilaterally and within the Stability Pact for South Eastern Europe since 1999, on political and economic cooperation in the region.

Bosnia and Herzegovina

Discussions continue between Croatia and Bosnia and Herzegovina on various sections of the border, which is the longest border with another country for each of the two states. Sections of the Una river and villages at the base of Mount Plješevica are in Croatia, while some are in Bosnia, which causes an excessive number of border crossings on a single route and impedes any serious development in the region. The Zagreb-Bihać-Split railway line is still closed for major traffic due to this issue. The border on the Una river between Hrvatska Kostajnica on the northern, Croatian side of the river, and Bosanska Kostajnica on the southern, Bosnian side, is also being discussed. A river island between the two towns is under Croatian control, but is also claimed by Bosnia. A shared border crossing point has been built, has been functioning since 2003, and is used without hindrance by either party. The Herzegovinian municipality of Neum in the south makes the southernmost part of Croatia an exclave, and the two countries are negotiating special transit rules through Neum to compensate for that. Recently Croatia has opted to build a bridge to the Pelješac peninsula to connect the Croatian mainland with the exclave, but Bosnia and Herzegovina has protested that the bridge will close
its access to international waters (although Croatian territory and territorial waters surround Bosnian-Herzegovinian territory and waters completely) and has suggested that the bridge must be higher than 55 meters to allow free passage of all types of ships. Negotiations are still being held.

Italy

Relations between Croatia and Italy have been largely cordial and friendly, although occasional incidents do arise on issues such as the Istrian–Dalmatian exodus or the Ecological and Fisheries Protection Zone.

Montenegro

Croatia and Montenegro have a largely latent border dispute over the Prevlaka peninsula.

Serbia

The Danube border between Croatia and Serbia is in dispute, particularly in Baranja, the Island of Vukovar and the Island of Šarengrad.

Slovenia

Croatia and Slovenia have several land and maritime boundary disputes, mainly in the Gulf of Piran, regarding Slovenian access to international waters, a small number of pockets of land on the right-hand side of the river Dragonja, and around the Sveta Gera peak. Slovenia was disputing Croatia's claim

had migrated north along the Caribbean island chain. The Taíno and Siboney were part of a cultural group commonly called the Arawak, who inhabited parts of northeastern South America prior to the arrival of Europeans. Initially, they settled at the eastern end of Cuba, before expanding westward across the island. The Spanish Dominican clergyman and writer Bartolomé de las Casas estimated that the Taíno population of Cuba had reached 350,000 by the end of the 15th century. The Taíno cultivated the yuca root, harvested it and baked it to produce cassava bread. They also grew cotton and tobacco, and ate maize and sweet potatoes. According to History of the Indians, they had "everything they needed for living; they had many crops, well arranged".

Spanish conquest and early colonization (1492–1800)

Christopher Columbus, on his first Spanish-sponsored voyage to the Americas in 1492, sailed south from what is now the Bahamas to explore the northeast coast of Cuba and the northern coast of Hispaniola. Columbus, who was searching for a route to India, believed the island to be a peninsula of the Asian mainland. The first sighting of a Spanish ship approaching the island was on 27 October 1492, probably at Bariay, Holguín Province, on the eastern point of the island. During a second voyage in 1494, Columbus passed along the south coast of the island, landing at various inlets including what was to become Guantánamo Bay. With the Papal Bull of 1493, Pope Alexander VI commanded Spain to conquer, colonize and convert the pagans of the New World to Catholicism. On arrival, Columbus observed the Taíno dwellings, describing them as "looking like tents in a camp. All were of palm branches, beautifully constructed". The Spanish began to create permanent settlements on the island of Hispaniola, east of Cuba, soon after Columbus' arrival in the Caribbean, but the coast of Cuba was not fully mapped by Europeans until 1508, when Sebastián de Ocampo completed this task. In 1511, Diego Velázquez de Cuéllar set out from Hispaniola to form the first Spanish settlement in Cuba, with orders from Spain to conquer the island. The settlement was at Baracoa, but the new settlers were greeted with stiff resistance from the local Taíno population. The Taínos were initially organized by cacique (chieftain) Hatuey, who had himself relocated from Hispaniola to escape the brutalities of Spanish rule on that island. After a prolonged guerrilla campaign, Hatuey and successive chieftains were captured and burnt alive, and within three years the Spanish had gained control of the island. In 1514, a settlement was founded in what was to become Havana. Clergyman Bartolomé de las Casas observed a number of massacres initiated by the invaders as the Spanish swept over the island, notably the massacre near Camagüey of the inhabitants of Caonao. According to his account, some three thousand villagers had traveled to Manzanillo to greet the Spanish with loaves, fishes and other foodstuffs, and were "without provocation, butchered". The surviving indigenous groups fled to the mountains or the small surrounding islands before being captured and forced into reservations. One such reservation was Guanabacoa, which is today a suburb of Havana. In 1513, Ferdinand II of Aragon issued a decree establishing the encomienda land settlement system that was to be incorporated throughout the Spanish Americas.
Velázquez, who had become Governor of Cuba after relocating from Baracoa to Santiago de Cuba, was given the task of apportioning both the land and the indigenous peoples to groups throughout the new colony. The scheme was not a success, however, as the natives either succumbed to diseases brought from Spain such as measles and smallpox, or simply refused to work, preferring to slip away into the mountains. Desperate for labor to work the new agricultural settlements, the Conquistadors sought slaves from surrounding islands and the continental mainland. Velázquez's lieutenant Hernán Cortés launched the Spanish conquest of the Aztec Empire from Cuba, sailing from Santiago to the Yucatán Peninsula. However, these newly arrived slaves followed the indigenous peoples by also dispersing into the wilderness or dying of disease. Despite the difficult relations between the natives and the new Europeans, some cooperation was in evidence. The natives showed the Spanish how to nurture tobacco and consume it in the form of cigars. There were also many unions between the largely male Spanish colonists and indigenous women. Modern-day studies have revealed traces of DNA associated with physical traits similar to those of Amazonian tribes in individuals throughout Cuba, although the native population was largely destroyed as a culture and civilization after 1550. Under the Spanish New Laws of 1552, indigenous Cubans were freed from encomienda, and seven towns for indigenous peoples were set up. There are Cuban families of indigenous (Taíno) descent in several places, mostly in eastern Cuba. The indigenous community at Caridad de los Indios, Guantánamo, is one such nucleus. An association of indigenous families in Jiguaní, near Santiago, is also active. The local indigenous population also left their mark on the language, with some 400 Taíno terms and place-names surviving to the present day. The names of Cuba itself, Havana, Camagüey, and many other places were derived from Classic Taíno, and indigenous words such as tobacco, hurricane and canoe were transferred to English and are used today.

Arrival of African slaves (1500–1820)

The Spanish established sugar and tobacco as Cuba's primary products, and the island soon supplanted Hispaniola as the prime Spanish base in the Caribbean. Further field labor was required, and African slaves were imported to work the plantations. However, restrictive Spanish trade laws made it difficult for Cubans to keep up with the 17th and 18th century advances in processing sugar cane pioneered in Barbados, Jamaica and Saint-Domingue. Spain also restricted Cuba's access to the slave trade, instead issuing foreign merchants asientos to conduct it on Spain's behalf. The advances in the system of sugar cane refinement did not reach Cuba until the Haitian Revolution in the nearby French colony of Saint-Domingue led thousands of refugee French planters to flee to Cuba and other islands in the West Indies, bringing their slaves and expertise in sugar refining and coffee growing into eastern Cuba in the 1790s and early 19th century. In the 19th century, Cuba's sugar plantations made the island the world's most important producer of sugar, thanks to the expansion of slavery and a relentless focus on improving the island's sugar technology. Use of modern refining techniques was especially important because the British Slave Trade Act 1807 abolished the slave trade in the British Empire (with slavery itself being abolished in the Slavery Abolition Act 1833).
The British government set about trying to eliminate the transatlantic slave trade. Under British diplomatic pressure, in 1817 Spain agreed to abolish the slave trade from 1820 in exchange for a payment from London. Cubans rapidly rushed to import further slaves in the time legally left to them. Over 100,000 new slaves were imported from Africa between 1816 and 1820. In other words, 100,000 African people were kidnapped and forced into slavery. In spite of the new restrictions, a large-scale illegal slave trade continued to flourish in the following years. Many Cubans were torn between the desire for the profits generated by sugar and a repugnance for slavery, which they saw as morally, politically, and racially dangerous to their society. By the end of the 19th century, slavery was abolished. However, prior to the abolition of slavery, Cuba gained great prosperity from its sugar trade. Originally, the Spanish had imposed regulations on trade with Cuba, which kept the island from becoming a dominant sugar producer. The Spanish were interested in keeping their trade routes and slave trade routes protected. Nevertheless, Cuba's vast size and abundance of natural resources made it an ideal place for becoming a booming sugar producer. When Spain opened the Cuban trade ports, the island quickly attracted foreign trade. New technology allowed a much more effective and efficient means of producing sugar: planters began to use water mills, enclosed furnaces, and steam engines to produce higher-quality sugar at a much more efficient pace than elsewhere in the Caribbean. The boom in Cuba's sugar industry in the 19th century made it necessary for the country to improve its transportation infrastructure. Planters needed safe and efficient ways to transport the sugar from the plantations to the ports, in order to maximize their returns. Many new roads were built, and old roads were quickly repaired. Railroads were built relatively early, easing the collection and transportation of perishable sugar cane. It was now possible for plantations all over this large island to have their sugar shipped quickly and easily.

Sugar plantations

Cuba failed to prosper before the 1760s, due to Spanish trade regulations. Spain had set up a trade monopoly in the Caribbean, and its primary objective was to protect this, which it did by barring the islands from trading with any foreign ships. The resultant stagnation of economic growth was particularly pronounced in Cuba because of its great strategic importance in the Caribbean, and the stranglehold that Spain kept on it as a result. As soon as Spain opened Cuba's ports up to foreign ships, a great sugar boom began that lasted until the 1880s. The island was perfect for growing sugar, being dominated by rolling plains, with rich soil and adequate rainfall. By 1860, Cuba was devoted to growing sugar and had to import all other necessary goods. Cuba was particularly dependent on the United States, which bought 82 percent of its sugar. In 1820, Spain abolished the slave trade, hurting the Cuban economy even more and forcing planters to buy more expensive, illegal, and "troublesome" slaves (as demonstrated by the slave rebellion on the Spanish ship Amistad in 1839).

Cuba under attack (1500–1800)

Colonial Cuba was a frequent target of buccaneers, pirates and French corsairs seeking Spain's New World riches. In response to repeated raids, defenses were bolstered throughout the island during the 16th century.
In Havana, the fortress of Castillo de los Tres Reyes Magos del Morro was built to deter potential invaders, who included the English privateer Francis Drake, who sailed within sight of Havana harbor but did not disembark on the island. Havana's inability to resist invaders was dramatically exposed in 1628, when a Dutch fleet led by Piet Heyn plundered the Spanish ships in the city's harbor. In 1662, the English pirate Christopher Myngs captured and briefly occupied Santiago de Cuba on the eastern part of the island, in an effort to open up Cuba's protected trade with neighboring Jamaica. Nearly a century later, the British Royal Navy launched another invasion, capturing Guantánamo Bay in 1741 during the War of Jenkins' Ear with Spain. Edward Vernon, the British admiral who devised the scheme, saw his 4,000 occupying troops capitulate to raids by Spanish troops and, more critically, to an epidemic, forcing him to withdraw his fleet to British Jamaica. In the War of the Austrian Succession, the British carried out unsuccessful attacks against Santiago de Cuba in 1741 and again in 1748. Additionally, a skirmish between British and Spanish naval squadrons occurred near Havana in 1748. The Seven Years' War, which erupted in 1754 across three continents, eventually arrived in the Spanish Caribbean. Spain's alliance with the French brought it into direct conflict with the British, and in 1762 a British expedition of five warships and 4,000 troops set out from Portsmouth to capture Cuba. The British arrived on 6 June, and by August had Havana under siege. When Havana surrendered, the commander of the British expedition, George Keppel, the 3rd Earl of Albemarle, entered the city as the new colonial governor and took control of the whole western part of the island. The arrival of the British immediately opened up trade with their North American and Caribbean colonies, causing a rapid transformation of Cuban society. Though Havana, which had become the third-largest city in the Americas, was to enter an era of sustained development and closer ties with North America during this period, the British occupation of the city proved short-lived. Pressure from London sugar merchants fearing a decline in sugar prices forced a series of negotiations with the Spanish over colonial territories. Less than a year after Havana was seized, the Peace of Paris was signed by the three warring powers, ending the Seven Years' War. The treaty gave Britain Florida in exchange for Cuba on France's recommendation to Spain; the French advised that declining the offer could result in Spain losing Mexico and much of the South American mainland to the British. In 1781, General Bernardo de Gálvez, the Spanish governor of Louisiana, reconquered Florida for Spain with Mexican, Puerto Rican, Dominican, and Cuban troops.

Reformism, annexation, and independence (1800–1898)

In the early 19th century, three major political currents took shape in Cuba: reformism, annexation and independence. In addition, spontaneous and isolated actions carried out from time to time added a current of abolitionism. The 1776 Declaration of Independence by the thirteen British colonies of North America and the successes of the French Revolution of 1789 influenced early Cuban liberation movements, as did the successful revolt of black slaves in Haiti in 1791.
One of the first such movements in Cuba, headed by the free black Nicolás Morales, aimed at gaining equality between "mulatto and whites" and at the abolition of sales taxes and other fiscal burdens. Morales' plot was discovered in 1795 in Bayamo, and the conspirators were jailed.

Reform, autonomy and separatist movements

As a result of the political upheavals caused by the Iberian Peninsular War of 1807–1814 and by Napoleon's removal of Ferdinand VII from the Spanish throne in 1808, a western separatist rebellion emerged among the Cuban Creole aristocracy in 1809 and 1810. One of its leaders, Joaquín Infante, drafted Cuba's first constitution, declaring the island a sovereign state, presuming the rule of the country's wealthy, maintaining slavery as long as it was necessary for agriculture, establishing a social classification based on skin color and declaring Catholicism the official religion. This conspiracy also failed, and the main leaders were sentenced to prison and deported to Spain. In 1812 a mixed-race abolitionist conspiracy arose, organized by José Antonio Aponte, a free black carpenter in Havana. He and others were executed. The Spanish Constitution of 1812, and the legislation passed by the Cortes of Cádiz after it was set up in 1808, instituted a number of liberal political and commercial policies, which were welcomed in Cuba but also curtailed a number of older liberties. Between 1810 and 1814 the island elected six representatives to the Cortes, in addition to forming a locally elected Provincial Deputation. Nevertheless, the liberal regime and the Constitution proved ephemeral: Ferdinand VII suppressed them when he returned to the throne in 1814. Therefore, by the end of the 1810s, some Cubans were inspired by the successes of Simón Bolívar in South America, despite the fact that the Spanish Constitution was restored in 1820. Numerous secret societies emerged, most notably the so-called "Soles y Rayos de Bolívar", founded in 1821 and led by José Francisco Lemus. It aimed to establish the free Republic of Cubanacán (a Taíno name for the center of the island), and it had branches in five districts of the island. In 1823 the society's leaders were arrested and condemned to exile. In the same year, King Ferdinand VII, with French help and with the approval of the Quintuple Alliance, managed to abolish constitutional rule in Spain yet again and to re-establish absolutism. As a result, the national militia of Cuba, established by the Constitution and a potential instrument for liberal agitation, was dissolved, a permanent executive military commission under the orders of the governor was created, newspapers were closed, elected provincial representatives were removed and other liberties were suppressed. This suppression, and the success of independence movements in the former Spanish colonies on the North American mainland, led to a notable rise of Cuban nationalism. A number of independence conspiracies developed during the 1820s and 1830s, but all failed. Among these were the "Expedición de los Trece" (Expedition of the 13) in 1826, the "Gran Legión del Águila Negra" (Great Legion of the Black Eagle) in 1829, the "Cadena Triangular" (Triangular Chain) and the "Soles de la Libertad" (Suns of Liberty) in 1837. Leading national figures in these years included Félix Varela (1788–1853) and Cuba's first revolutionary poet, José María Heredia (1803–1839). Between 1810 and 1826, 20,000 royalist refugees from the Latin American Revolutions arrived in Cuba.
They were joined by others who left Florida when Spain ceded it to the United States in 1819. These influxes strengthened loyalist pro-Spanish sentiments on the island.

Antislavery and independence movements

In 1826 the first armed uprising for independence took place in Puerto Príncipe (Camagüey Province), led by Francisco de Agüero and Andrés Manuel Sánchez. Agüero, a white man, and Sánchez, a mulatto, were both executed, becoming the first popular martyrs of the Cuban independence movement. The 1830s saw a surge of activity from the reformist movement, whose main leader, José Antonio Saco, stood out for his criticism of Spanish despotism and of the slave trade. Nevertheless, this surge bore no fruit; Cubans remained deprived of the right to send representatives to the Spanish parliament, and Madrid stepped up repression. Nonetheless, Spain had long been under pressure to end the slave trade. In 1817 Ferdinand VII signed a decree against the slave trade, to which the Spanish Empire did not adhere. Under British diplomatic pressure, the Spanish government signed a treaty in 1835 which pledged to eventually abolish slavery and the slave trade. In this context, black revolts in Cuba increased, and were put down with mass executions. One of the most significant was the Conspiración de la Escalera (Ladder Conspiracy), which started in March 1843 and continued until 1844. The conspiracy took its name from a torture method, in which blacks were tied to a ladder and whipped until they confessed or died. The Ladder Conspiracy involved free blacks and slaves, as well as white intellectuals and professionals. It is estimated that 300 blacks and mulattos died from torture, 78 were executed, over 600 were imprisoned and over 400 expelled from the island. (See comments in the new translation of Villaverde's "Cecilia Valdés".) The executed included the leading poet now commonly known as "Plácido" (1809–1844). José Antonio Saco, one of Cuba's most prominent thinkers, was expelled from Cuba. Following the 1868–1878 rebellion of the Ten Years' War, all slavery was abolished by 1886, making Cuba the second-to-last country in the Western Hemisphere to abolish slavery, with Brazil being the last. Instead of blacks, slave traders looked for other sources of cheap labour, such as Chinese colonists and Indians from Yucatán. Another feature of the population was the number of Spanish-born colonists, known as peninsulares, who were mostly adult males; they constituted between ten and twenty per cent of the population between the middle of the 19th century and the Great Depression of the 1930s.

The possibility of annexation by the United States

Black unrest and attempts by the Spanish metropolis to abolish slavery motivated many Creoles to advocate Cuba's annexation by the United States, where slavery was still legal. Other Cubans supported the idea due to their desire for American-style economic development and democratic freedom. The annexation of Cuba was repeatedly proposed by government officials in the United States. In 1805, President Thomas Jefferson considered annexing Cuba for strategic reasons, sending secret agents to the island to negotiate with Captain General Someruelos. In April 1823, U.S. Secretary of State John Quincy Adams discussed the rules of political gravitation, in a theory often referred to as the "ripe fruit theory".
Adams wrote, "There are laws of political as well as physical gravitation; and if an apple severed by its native tree cannot choose but fall to the ground, Cuba, forcibly disjoined from its own unnatural connection with Spain, and incapable of self-support, can gravitate only towards the North American Union which by the same law of nature, cannot cast her off its bosom". He furthermore warned that "the transfer of Cuba to Great Britain would be an event unpropitious to the interest of this Union". Adams voiced concern that a country outside of North America would attempt to occupy Cuba upon its separation from Spain. He wrote, "The question both of our right and our power to prevent it, if necessary, by force, already obtrudes itself upon our councils, and the administration is called upon, in the performance of its duties to the nation, at least to use all the means with the competency to guard against and forfend it". On 2 December 1823, U.S. President James Monroe specifically addressed Cuba and other European colonies in his proclamation of the Monroe Doctrine. Cuba, located just from Key West, Florida, was of interest to the doctrine's founders, as they warned European forces to leave "America for the Americans". The most outstanding attempts in support of annexation were made by the Venezuelan filibuster General Narciso López, who prepared four expeditions to Cuba in the US. The first two, in 1848 and 1849, failed before departure due to U.S. opposition. The third, made up of some 600 men, managed to land in Cuba and take the central city of Cárdenas, but failed eventually due to a lack of popular support. López's fourth expedition landed in Pinar del Río province with around 400 men in August 1851; the invaders were defeated by Spanish troops and López was executed. The struggle for independence In the 1860s, Cuba had two more liberal-minded governors, Serrano and Dulce, who encouraged the creation of a Reformist Party, despite the fact that political parties were forbidden. But they were followed by a reactionary governor, Francisco Lersundi, who suppressed all liberties granted by the previous governors and maintained a pro-slavery regime. On 10 October 1868, the landowner Carlos Manuel de Céspedes declared Cuban independence and freedom for his slaves. This began the Ten Years' War, which lasted from 1868 to 1878. The Dominican Restoration War (1863–65) brought to Cuba an unemployed mass of former Dominican white and light-skinned mulattos who had served with the Spanish Army in the Dominican Republic before being evacuated to Cuba and discharged from the army. Some of these former soldiers joined the new Revolutionary Army and provided its initial training and leadership. With reinforcements and guidance from the Dominicans, the Cuban rebels defeated Spanish detachments, cut railway lines, and gained dominance over vast sections of the eastern portion of the island. The Spanish government used the Voluntary Corps to commit harsh and bloody acts against the Cuban rebels, and the Spanish atrocities fuelled the growth of insurgent forces in eastern Cuba; however, they failed to export the revolution to the west. On 11 May 1873, Ignacio Agramonte was killed by a stray bullet; Céspedes was surprised and killed on 27 February 1874. In 1875, Máximo Gómez began an invasion of Las Villas west of a fortified military line, or trocha, bisecting the island. The trocha was built between 1869 and 1872; the Spanish erected it to prevent Gómez to move westward from Oriente province. 
It was the largest fortification built by the Spanish in the Americas. Gómez was controversial in his calls to burn sugar plantations to harass the Spanish occupiers. After the American general Henry Reeve was killed in 1876, Gómez ended his campaign. By that year, the Spanish government had deployed more than 250,000 troops to Cuba, as the end of the Third Carlist War had freed up Spanish soldiers for the suppression of the revolt. On 10 February 1878, General Arsenio Martínez Campos negotiated the Pact of Zanjón with the Cuban rebels, and the rebel general Antonio Maceo's surrender on 28 May ended the war. Spain sustained 200,000 casualties, mostly from disease; the rebels lost 100,000–150,000 dead, and the island suffered over $300 million in property damage. The Pact of Zanjón promised the manumission of all slaves who had fought for Spain during the war, and slavery was legally abolished in 1880. However, dissatisfaction with the peace treaty led to the Little War of 1879–80.

Conflicts in the late 19th century (1886–1900)

Background

Social, political, and economic change

During the time of the so-called "Rewarding Truce", which encompassed the 17 years from the end of the Ten Years' War in 1878, fundamental changes took place in Cuban society. With the abolition of slavery in October 1886, former slaves joined the ranks of farmers and the urban working class. Most wealthy Cubans lost their rural properties, and many of them joined the urban middle class. The number of sugar mills dropped and efficiency increased, with ownership concentrated among companies and the most powerful plantation owners. The numbers of campesinos and tenant farmers rose considerably. Furthermore, American capital began flowing into Cuba, mostly into the sugar and tobacco businesses and mining. By 1895, these investments totalled $50 million. Although Cuba remained Spanish politically, economically it became increasingly dependent on the United States. These changes also entailed the rise of labour movements. The first Cuban labour organization, the Cigar Makers Guild, was created in 1878, followed by the Central Board of Artisans in 1879, and many more across the island. Abroad, a new trend of aggressive American influence emerged, evident in Secretary of State James G. Blaine's expressed belief that all of Central and South America would some day fall to the US. Blaine placed particular importance on the control of Cuba. "That rich island", he wrote on 1 December 1881, "the key to the Gulf of Mexico, is, though in the hands of Spain, a part of the American commercial system…If ever ceasing to be Spanish, Cuba must necessarily become American and not fall under any other European domination". Blaine's vision did not allow for the existence of an independent Cuba.

Martí's Insurrection and the start of the war

After his second deportation to Spain in 1878, the pro-independence Cuban activist José Martí moved to the United States in 1881, where he began mobilizing the support of the Cuban exile community in Florida, especially in Ybor City in Tampa and Key West. He sought a revolution and Cuban independence from Spain, but also lobbied to oppose U.S. annexation of Cuba, which some American and Cuban politicians desired. Propaganda efforts continued for years and intensified starting in 1895.
After deliberations with patriotic clubs across the United States, the Antilles and Latin America, the Partido Revolucionario Cubano (Cuban Revolutionary Party) was officially proclaimed on 10 April 1892, with the purpose of gaining independence for both Cuba and Puerto Rico. Martí was elected delegate, the highest party position. By the end of 1894, the basic conditions for launching the revolution were set. In Foner's words, "Martí's impatience to start the revolution for independence was affected by his growing fear that the United States would succeed in annexing Cuba before the revolution could liberate the island from Spain". On 25 December 1894, three ships, the Lagonda, the Almadis and the Baracoa, set sail for Cuba from Fernandina Beach, Florida, loaded with armed men and supplies. Two of the ships were seized by U.S. authorities in early January; the authorities also alerted the Spanish government, but the proceedings went ahead. The insurrection began on 24 February 1895, with uprisings all across the island. In Oriente the most important ones took place in Santiago, Guantánamo, Jiguaní, San Luis, El Cobre, El Caney, Alto Songo, Bayate and Baire. The uprisings in the central part of the island, such as Ibarra, Jagüey Grande and Aguada, suffered from poor co-ordination and failed; the leaders were captured, some of them deported and some executed. In the province of Havana the insurrection was discovered before it got off the ground, and the leaders were detained. Thus, the insurgents further west in Pinar del Río were ordered to wait. Martí, on his way to Cuba, gave the Proclamation of Montecristi in Santo Domingo, outlining the policy for Cuba's war of independence: the war was to be waged by blacks and whites alike; participation of all blacks was crucial for victory; Spaniards who did not object to the war effort should be spared; private rural properties should not be damaged; and the revolution should bring new economic life to Cuba. On 1 and 11 April 1895, the main rebel leaders landed on two expeditions in Oriente: Major Antonio Maceo and 22 members near Baracoa, and Martí, Máximo Gómez and four other members at Playitas. Around that time, Spanish forces in Cuba numbered about 80,000, of which 20,000 were regular troops and 60,000 were Spanish and Cuban volunteers. The latter were a locally enlisted force that took care of most of the guard and police duties on the island. Wealthy landowners would volunteer a number of their slaves to serve in this force, which was under local control and not under official military command. By December, 98,412 regular troops had been sent to the island and the number of volunteers had increased to 63,000 men. By the end of 1897, there were 240,000 regulars and 60,000 irregulars on the island. The revolutionaries were far outnumbered. The rebels came to be nicknamed "Mambis" after a black Spanish officer, Juan Ethninius Mamby, who joined the Dominicans in the fight for independence in 1846. The Spanish soldiers referred to the Dominican insurgents as "the men of Mamby" and "Mambis". When the Ten Years' War broke out in 1868, some of the same soldiers were assigned to Cuba, importing what had by then become a derogatory Spanish slur. The Cubans adopted the name with pride. After the Ten Years' War, possession of weapons by private individuals was prohibited in Cuba. Thus, one of the most serious and persistent problems for the rebels was a shortage of suitable weapons.
This lack of arms forced them to utilise guerrilla tactics, using the environment, the element of surprise, fast horses and simple weapons such as machetes. Most of their firearms were acquired in raids on the Spaniards. Between 11 June 1895 and 30 November 1897, 60 attempts were made to bring weapons and supplies to the rebels from outside Cuba, but only one succeeded, largely due to British naval protection. 28 of these resupply attempts were halted within U.S. territory, five were intercepted by the U.S. Navy, four by the Spanish Navy, two were wrecked, one was driven back to port by a storm, and the fate of another is unknown.

Escalation of the war

Martí was killed on 19 May 1895, during a reckless charge against entrenched Spanish forces, but Máximo Gómez (a Dominican) and Antonio Maceo (a mulatto) fought on, taking the war to all parts of Oriente. Gómez used scorched-earth tactics, which entailed dynamiting passenger trains and burning the Spanish loyalists' property and sugar plantations, including many owned by Americans. By the end of June all of Camagüey was at war. Continuing west, Gómez and Maceo joined up with veterans of the 1868 war, the Polish internationalist General Carlos Roloff and Serafín Sánchez, in Las Villas, swelling their ranks and boosting their arsenal. In mid-September, representatives of the five Liberation Army Corps assembled in Jimaguayú, Camagüey, to approve the Jimaguayú Constitution. This constitution established a central government, which grouped the executive and legislative powers into one entity, the Government Council, headed by Salvador Cisneros and Bartolomé Masó. After a period of consolidation in the three eastern provinces, the liberation armies headed for Camagüey and then for Matanzas, outmanoeuvring and deceiving the Spanish Army several times. The revolutionaries defeated the Spanish general Arsenio Martínez Campos, himself the victor of the Ten Years' War, and killed his most trusted general at Peralejo. Campos tried the same strategy he had employed in the Ten Years' War, constructing a broad defensive belt across the island. This line, called the trocha, was intended to limit rebel activities to the eastern provinces, and consisted of a railroad, from Jucaro in the south to Moron in the north, on which armored railcars could travel. At various points along this railroad there were fortifications, with posts and barbed wire at regular intervals. In addition, booby traps were placed at the locations most likely to be attacked. For the rebels, it was essential to bring the war to the western provinces of Matanzas, Havana and Pinar del Río, where the island's government and wealth were located. The Ten Years' War had failed because it had not managed to proceed beyond the eastern provinces. In a successful cavalry campaign, overcoming the trochas, the rebels invaded every province. Surrounding all the larger cities and well-fortified towns, they arrived at the westernmost tip of the island on 22 January 1896, exactly three months after the invasion near Baraguá. Unable to defeat the rebels with conventional military tactics, the Spanish government sent Gen. Valeriano Weyler y Nicolau (nicknamed The Butcher), who reacted to these rebel successes by introducing terror methods: periodic executions, mass exiles, and the destruction of farms and crops.
These methods reached their height on 21 October 1896, when he ordered all countryside residents and their livestock to gather within eight days in various fortified areas and towns occupied by his troops. Hundreds of thousands of people had to leave their homes, creating appalling conditions of overcrowding in the towns and cities. This was the first recorded and recognized use of concentration camps, in which non-combatants were removed from their land to deprive the enemy of succor and the internees were then subjected to appalling conditions. The Spanish also employed concentration camps in the Philippines shortly afterwards, again resulting in massive non-combatant fatalities. It is estimated that this measure caused the death of at least one-third of Cuba's rural population. The forced relocation policy was maintained until March 1898. Since the early 1880s, Spain had also been suppressing an independence movement in the Philippines, which was intensifying; Spain was thus now fighting two wars, which placed a heavy burden on its economy. In secret negotiations in 1896, Spain turned down the United States' offers to buy Cuba. Maceo was killed on 7 December 1896, in Havana province, while returning from the west. As the war continued, the major obstacle to Cuban success was weapons supply. Although weapons and funding came from within the United States, the supply operation violated American laws, which were enforced by the U.S. Coast Guard; of 71 resupply missions, only 27 got through, with 5 being stopped by the Spanish and 33 by the U.S. Coast Guard. In 1897, the liberation army maintained a privileged position in Camagüey and Oriente, where the Spanish only controlled a few cities. The Spanish liberal leader Práxedes Sagasta admitted in May 1897: "After having sent 200,000 men and shed so much blood, we don't own more land on the island than what our soldiers are stepping on". The rebel force of 3,000 defeated the Spanish in various encounters, such as the battle of La Reforma and the surrender of Las Tunas on 30 August, and the Spaniards were kept on the defensive. Las Tunas had been guarded by over 1,000 well-armed and well-supplied men. As stipulated at the Jimaguayú Assembly two years earlier, a second Constituent Assembly met in La Yaya, Camagüey, on 10 October 1897. The newly adopted constitution decreed that the military command be subordinated to civilian rule. The government was confirmed, naming Bartolomé Masó as president and Domingo Méndez Capote as vice president. Thereafter, Madrid decided to change its policy toward Cuba, replacing Weyler, drawing up a colonial constitution for Cuba and Puerto Rico, and installing a new government in Havana. But with half the country out of its control, and the other half in arms, the new government was powerless and rejected by the rebels.

The USS Maine incident

The Cuban struggle for independence had captured the North American imagination for years, and newspapers had been agitating for intervention with sensational stories of Spanish atrocities against the native Cuban population. Americans came to believe that Cuba's battle with Spain resembled the United States' own Revolutionary War. This continued even after Spain replaced Weyler and said it had changed its policies, and North American public opinion was very much in favor of intervening on behalf of the Cubans.
In January 1898, a riot by Cuban-Spanish loyalists against the new autonomous government broke out in Havana, leading to the destruction of the printing presses of four local newspapers which had published articles critical of the Spanish Army. The U.S. Consul-General cabled Washington, fearing for the lives of Americans living in Havana. In response, the battleship USS Maine was sent to Havana in the last week of January. On 15 February 1898, the Maine was destroyed by an explosion, killing 268 crew members. The cause of the explosion has not been clearly established to this day, but the incident focused American attention on Cuba, and President William McKinley and his supporters could not stop Congress from declaring war to "liberate" Cuba. In an attempt to appease the United States, the colonial government took two steps that had been demanded by President McKinley: it ended the forced relocation policy and offered negotiations with the independence fighters. However, the truce was rejected by the rebels and the concessions proved too late and too ineffective. Madrid asked other European powers for help; they refused and said Spain should back down. On 11 April 1898, McKinley asked Congress for authority to send U.S. troops to Cuba for the purpose of ending the civil war there. On 19 April, Congress passed joint resolutions (by a vote of 311 to 6 in the House and 42 to 35 in the Senate) supporting Cuban independence and disclaiming any intention to annex Cuba, demanding Spanish withdrawal, and authorizing the president to use as much military force as he thought necessary to help Cuban patriots gain independence from Spain. This was adopted by resolution of Congress and included the Teller Amendment, introduced by Senator Henry Teller and passed unanimously, stipulating that "the island of Cuba is, and by right should be, free and independent". The amendment disclaimed any intention on the part of the United States to exercise jurisdiction or control over Cuba for other than pacification reasons, and confirmed that the armed forces would be removed once the war was over. Both houses passed the amendment on 19 April, McKinley signed the joint resolution on 20 April, and the ultimatum was forwarded to Spain. War was declared on 20/21 April 1898. "It's been suggested that a major reason for the U.S. war against Spain was the fierce competition emerging between Joseph Pulitzer's New York World and William Randolph Hearst's New York Journal", Joseph E. Wisan wrote in an essay titled "The Cuban Crisis As Reflected in the New York Press" (1934). He stated that "In the opinion of the writer, the Spanish–American War would not have occurred had not the appearance of Hearst in New York journalism precipitated a bitter battle for newspaper circulation." It has also been argued that the main reason the United States entered the war was the failed secret attempt, in 1896, to purchase Cuba from a weaker, war-depleted Spain.

The Cuban Theatre of the Spanish–American War

Hostilities started hours after the declaration of war when a U.S. contingent under Admiral William T. Sampson blockaded several Cuban ports. The Americans decided to invade Cuba and to start in Oriente, where the Cubans had almost absolute control and were able to co-operate, for example, by establishing a beachhead and protecting the U.S. landing at Daiquirí. The first U.S. objective was to capture the city of Santiago de Cuba in order to destroy Linares' army and Cervera's fleet.
To reach Santiago they had to pass through concentrated Spanish defences in the San Juan Hills and the small town of El Caney. Between 22 and 24 June 1898 the Americans landed under General William R. Shafter at Daiquirí and Siboney, east of Santiago, and established a base. The port of Santiago became the main target of U.S. naval operations, and the American fleet attacking Santiago needed shelter from the summer hurricane season. Nearby Guantánamo Bay, with its excellent harbour, was chosen for this purpose and attacked on 6 June. The Battle of Santiago de Cuba, on 3 July 1898, was the largest naval engagement during the Spanish–American War, and resulted in the destruction of the Spanish Caribbean Squadron. Resistance in Santiago consolidated around Fort Canosa, while major battles between Spaniards and Americans took place at Las Guasimas on 24 June, and at El Caney and San Juan Hill on 1 July, after which the American advance ground to a halt. American losses at Las Guasimas were 16 killed and 52 wounded; the Spanish lost 12 dead and 24 wounded. The Americans lost 81 killed in action and 360 wounded in action in taking El Caney, where the Spanish defenders lost 38 killed, 138 wounded and 160 captured. At San Juan, the Americans lost 144 dead, 1,024 wounded, and 72 missing; Spanish losses were 58 killed, 170 wounded, and 39 captured. Spanish troops successfully defended Fort Canosa, allowing them to stabilize their line and bar the entry to Santiago. The Americans and Cubans began a siege of the city, which surrendered on 16 July after the defeat of the Spanish Caribbean Squadron. Thus, Oriente fell under the control of the Americans and the Cubans, but U.S. General Nelson A. Miles would not allow Cuban troops to enter Santiago, claiming that he wanted to prevent clashes between Cubans and Spaniards. Thus, Cuban General Calixto García, head of the mambi forces in the Eastern department, ordered his troops to hold their respective areas and resigned, writing a letter of protest to General Shafter. After losing the Philippines and Puerto Rico, which had also been invaded by the United States, and with no hope of holding on to Cuba, Spain sued for peace on 17 July 1898. On 12 August, the U.S. and Spain signed a protocol of peace, in which Spain agreed to relinquish all claim of sovereignty over and title to Cuba. On 10 December 1898, the U.S. and Spain signed the formal Treaty of Paris, recognizing continuing U.S. military occupation. Although the Cubans had participated in the liberation efforts, the United States prevented Cuba from sending representatives to the Paris peace talks or signing the treaty, which set no time limit for U.S. occupation and excluded the Isle of Pines from Cuba. Although the U.S. president had no objection to Cuba's eventual independence, U.S. General William R. Shafter refused to allow Cuban General Calixto García and his rebel forces to participate in the surrender ceremonies in Santiago de Cuba.

U.S. occupation (1898–1902)

After the last Spanish troops left the island in December 1898, the government of Cuba was temporarily handed over to the United States on 1 January 1899. The first governor was General John R. Brooke. Unlike Guam, Puerto Rico, and the Philippines, the United States did not annex Cuba because of the restrictions imposed in the Teller Amendment.

Political changes

The U.S. administration was undecided on Cuba's future status. Once the island had been pried away from the Spaniards, it was to be ensured that it moved into and remained within the U.S. sphere.
How this was to be achieved was a matter of intense discussion, and annexation was an option, not only on the mainland but also in Cuba. McKinley spoke about the links that should exist between the two nations. Brooke set up a civilian government, placed U.S. governors in seven newly created departments, and named civilian governors for the provinces as well as mayors and representatives for the municipalities. Many Spanish colonial government officials were kept in their posts. The population was ordered to disarm and, ignoring the Mambi Army, Brooke created the Rural Guard and municipal police corps at the service of the occupation forces. Cuba's judicial powers and courts remained legally based on the codes of the Spanish government. Tomás Estrada Palma, Martí's successor as delegate of the Cuban Revolutionary Party, dissolved the party a few days after the signing of the Paris Treaty in December 1898, claiming that the objectives of the party had been met. The revolutionary Assembly of Representatives was also dissolved. Thus, the three representative institutions of the national liberation movement disappeared.

Economic changes

Before the United States officially took over the government, it had already begun cutting tariffs on American goods entering Cuba, without granting the same rights to Cuban goods going to the United States. Government payments had to be made in U.S. dollars. In spite of the Foraker Amendment, which prohibited the U.S. occupation government from granting privileges and concessions to American investors, the Cuban economy was soon dominated by American capital. The growth of American sugar estates was so quick that in 1905 nearly 10% of Cuba's total land area belonged to American citizens. By 1902, American companies controlled 80% of Cuba's ore exports and owned most of the sugar and cigarette factories. Immediately after the war, there were several serious barriers for foreign businesses attempting to operate in Cuba. Three separate pieces of legislation (the Joint Resolution of 1898, the Teller Amendment, and the Foraker Amendment) threatened foreign investment. The Joint Resolution of 1898 stated that the Cuban people are by right free and independent, while the Teller Amendment further declared that the United States could not annex Cuba. These two pieces of legislation were crucial in appeasing anti-imperialists as the United States intervened in the war in Cuba. Similarly, the Foraker Amendment, which prohibited the U.S. military government from granting concessions to American companies, was passed to appease anti-imperialists during the occupation period. Although these three statutes enabled the United States to gain a foothold in Cuba, they presented obstacles for American businesses seeking to acquire land and permits. Eventually, Cornelius Van Horne of the Cuba Company, an early railroad company in Cuba, found a loophole in "revocable permits" justified by preexisting Spanish legislation that effectively allowed railroads to be built in Cuba. General Leonard Wood, the governor of Cuba and a noted annexationist, used this loophole to grant hundreds of franchises, permits, and other concessions to American businesses. Once the legal barriers were overcome, American investments transformed the Cuban economy. Within two years of entering Cuba, the Cuba Company built a 350-mile railroad connecting the eastern port of Santiago to the existing railways in central Cuba.
The company was the largest single foreign investment in Cuba for the first two decades of the twentieth century. By the 1910s it was the largest company in the country. The improved infrastructure allowed the sugar cane industry to spread to the previously underdeveloped eastern part of the country. As many small Cuban sugar cane producers were crippled with debt and damages from the war, American companies were able to quickly and cheaply take over the sugar cane industry. At the same time, new productive units called centrales could grind up to 2,000 tons of cane a day making large-scale operations most profitable. The large fixed cost of these centrales made them almost exclusively accessible to American companies with large capital stocks. Furthermore, the centrales required a large, steady flow of cane to remain profitable, which led to further consolidation in the industry. Cuban cane farmers who had formerly been landowners became tenants on company land, funneling raw cane to the centrales. By 1902, 40% of the country's sugar production was controlled by North Americans. With American corporate interests firmly rooted in Cuba, the U.S. tariff system was adjusted accordingly to strengthen trade between the nations. The Reciprocity Treaty of 1903 lowered the U.S. tariff on Cuban sugar by 20%. This gave Cuban sugar a competitive edge in the American marketplace. At the same time, it granted equal or greater concessions on most items imported from the United States. Cuban imports of American goods went from $17 million in the five years before the war, to $38 million in 1905, and eventually to over $200 million in 1918. Likewise, Cuban exports to the United States reached $86 million in 1905 and rose to nearly $300 million in 1918. Elections and independence Popular demands for a Constituent Assembly soon emerged. In December 1899, the U.S. War Secretary assured the Cuban populace that the occupation was temporary, that municipal and general elections would be held, that a Constituent Assembly would be set up, and that sovereignty would be handed to Cubans. Brooke was replaced by General Leonard Wood to oversee the transition. Parties were created, including the Cuban National Party, the Federal Republican Party of Las Villas, the Republican Party of Havana and the Democratic Union Party. The first elections for mayors, treasurers and attorneys of the country's 110 municipalities for a one-year-term took place on 16 June 1900, but balloting was limited to literate Cubans older than 21 and with properties worth more than $250. Only members of the dissolved Liberation Army were exempt from these conditions. Thus, the number of about 418,000 male citizens over 21 was reduced to about 151,000. 360,000 women were totally excluded. The same elections were held one year later, again for a one-year-term. Elections for 31 delegates to a Constituent Assembly were held on 15 September 1900 with the same balloting restrictions. In all three elections, pro-independence candidates, including a large number of mambi delegates, won overwhelming majorities. The Constitution was drawn up from November 1900 to February 1901 and then passed by the Assembly. It established a republican form of government, proclaimed internationally recognized individual rights and liberties, freedom of religion, separation between church and state, and described the composition, structure and functions of state powers. On 2 March 1901, the U.S. 
Congress passed the Army Appropriations Act, stipulating the conditions for the withdrawal of United States troops remaining in Cuba following the Spanish–American War. As a rider, this act included the Platt Amendment, which defined the terms of Cuban-U.S. relations until 1934. It replaced the earlier Teller Amendment. The amendment provided for a number of rules heavily infringing on Cuba's sovereignty:

That the government of Cuba shall never enter into any treaty with any foreign power which will impair the independence of Cuba, nor in any manner permit any foreign power to obtain control over any portion of the island.
That Cuba would contract no foreign debt without guarantees that the interest could be served from ordinary revenues.
That Cuba consent that the United States may intervene for the preservation of Cuban independence, to protect life, property, and individual liberty, and to discharge the obligations imposed by the Treaty of Paris.
That the Cuban claim to the Isle of Pines (now called Isla de la Juventud) was not acknowledged and was to be determined by treaty.
That Cuba commit to providing the United States "lands necessary for coaling or naval stations at certain specified points to be agreed upon".

As a precondition to Cuba's independence, the | level of telephone penetration in Latin America, although many telephone users were still unconnected to switchboards. Moreover, Cuba's health service was remarkably developed. By the late 1950s, it had one of the highest numbers of doctors per capita (more than in the United Kingdom at that time) and the third-lowest adult mortality rate in the world. According to the World Health Organization, the island had the lowest infant mortality rate in Latin America, and the 13th-lowest in the world, better than in contemporary France, Belgium, West Germany, Israel, Japan, Austria, Italy, Spain, and Portugal. Additionally, Cuba's education spending in the 1950s was the highest in Latin America, relative to GDP. Cuba had the fourth-highest literacy rate in the region, at almost 80% according to the United Nations, higher than that of Spain at the time.

Stagnation and dissatisfaction

However, the United States, rather than Latin America, was the frame of reference for educated Cubans. Cubans travelled to the United States, read American newspapers, listened to American radio, watched American television, and were attracted to American culture. Middle-class Cubans grew frustrated at the economic gap between Cuba and the US. The middle class became increasingly dissatisfied with the administration, while labour unions supported Batista until the very end. Large income disparities arose due to the extensive privileges enjoyed by Cuba's unionized workers. Cuban labour unions had established limitations on mechanization and even banned dismissals in some factories. The labour unions' privileges were obtained in large measure "at the cost of the unemployed and the peasants". Cuba's labour regulations ultimately caused economic stagnation. Hugh Thomas asserts that "militant unions succeeded in maintaining the position of unionized workers and, consequently, made it difficult for capital to improve efficiency." Between 1933 and 1958, Cuba increased economic regulation enormously. The regulation led to declining investment. The World Bank also complained that the Batista administration raised the tax burden without assessing its impact. Unemployment was high; many university graduates could not find jobs.
After its earlier meteoric rise, the Cuban gross domestic product grew at only 1% annually on average between 1950 and 1958.

Political repression and human rights abuses

Back in power after his 1952 coup, and receiving military, financial, and logistical support from the United States, Batista suspended the 1940 Constitution and revoked most political liberties, including the right to strike. He then aligned with the wealthiest landowners who owned the largest sugar plantations, and presided over a stagnating economy that widened the gap between rich and poor Cubans. Eventually it reached the point where most of the sugar industry was in U.S. hands, and foreigners owned 70% of the arable land. As such, Batista's repressive government then began to systematically profit from the exploitation of Cuba's commercial interests, by negotiating lucrative relationships with both the American Mafia, who controlled the drug, gambling, and prostitution businesses in Havana, and with large U.S.-based multinational companies who were awarded lucrative contracts. To quell the growing discontent amongst the populace—which was subsequently displayed through frequent student riots and demonstrations—Batista established tighter censorship of the media, while also utilizing his Bureau for the Repression of Communist Activities secret police to carry out wide-scale violence, torture and public executions. These murders mounted in 1957, as Fidel Castro gained more publicity and influence. Estimates of the number of people killed range from hundreds to about 20,000.

Rise of Communism (1947 - 1959)

In 1952, Fidel Castro, a young lawyer running for a seat in the Chamber of Representatives for the Partido Ortodoxo, founded in 1947 to expose government corruption and demand reform, circulated a petition to depose Batista's government on the grounds that it had illegitimately suspended the electoral process. However, the courts did not act on the petition and ignored Castro's legal challenges. Castro thus resolved to use armed force to overthrow Batista; he and his brother Raúl gathered supporters, and on 26 July 1953 led an attack on the Moncada Barracks near Santiago de Cuba. The attack ended in failure: the authorities killed several of the insurgents, captured Castro himself, tried him and sentenced him to 15 years in prison. However, the Batista government released him in 1955, when amnesty was given to many political prisoners, including those who had assaulted the Moncada Barracks. Castro and his brother subsequently went into exile in Mexico, where they met the Argentine revolutionary Ernesto "Che" Guevara. While in Mexico, Guevara and the Castros organized the 26 July Movement with the goal of overthrowing Batista. In December 1956, Fidel Castro led a group of 82 fighters to Cuba aboard the yacht Granma, landing in the eastern part of the island. Despite a pre-landing uprising in Santiago by Frank País Pesqueira and his followers among the urban pro-Castro movement, Batista's forces promptly killed, dispersed or captured most of Castro's men. Castro managed to escape into the Sierra Maestra mountains with as few as 12 fighters, aided by the urban and rural opposition, including Celia Sanchez and the bandits of Crescencio Pérez's family. Castro and Guevara then began a guerrilla campaign against the Batista régime, with their main forces supported by numerous poorly armed escopeteros and the well-armed fighters of Frank País' urban organization.
Growing anti-Batista resistance, including a bloodily crushed uprising by Cuban Navy personnel in Cienfuegos, soon led to chaos in the country. At the same time, rival guerrilla groups in the Escambray Mountains also grew more effective. Castro attempted to arrange a general strike in 1958, but could not win support among Communists or labor unions. Multiple attempts by Batista's forces to crush the rebels ended in failure. Castro's forces were able to acquire captured weapons, including 12 mortars, 2 bazookas, 12 machine guns mounted on tripods, 21 light machine guns, 142 M-1 rifles, and 200 Dominican Cristobal submachine guns. The biggest prize for the rebels was a government M4 Sherman tank, which would be used in the Battle of Santa Clara. The United States imposed trade restrictions on the Batista administration and sent an envoy who attempted to persuade Batista to leave the country voluntarily. With the military situation becoming untenable, Batista fled on 1 January 1959, and Castro took over. Within months of taking control, Castro moved to consolidate his power by marginalizing other resistance groups and figures and imprisoning and executing opponents and dissident former supporters. As the revolution became more radical and continued its marginalization of the wealthy, of landowners, and of some of those who opposed its direction, thousands of Cubans fled the island, eventually, over decades, forming a large exile community in the United States. Cuban Americans today constitute a large percentage of the population of the U.S. state of Florida, and form a significant voting bloc.

Castro's Cuba (1959 - 2006)

Politics

On 1 January 1959, Che Guevara marched his troops from Santa Clara to Havana, without encountering resistance. Meanwhile, Fidel Castro marched his soldiers to the Moncada Army Barracks, where all 5,000 soldiers in the barracks defected to the Revolutionary movement. On 4 February 1959, Fidel Castro announced a massive reform plan which included a public works project, land reform granting nearly 200,000 families farmland, and plans to nationalize various industries. The new government of Cuba soon encountered opposition from militant groups and from the United States, which had supported Batista politically and economically. Fidel Castro quickly purged political opponents from the administration. Loyalty to Castro and the revolution became the primary criterion for all appointments. Mass organisations that opposed the revolutionary government, such as labor unions, were made illegal. By the end of 1960, all opposition newspapers had been closed down and all radio and television stations had come under state control. Teachers and professors found to be involved with counter-revolution were purged. Fidel's brother Raúl Castro became the commander of the Revolutionary Armed Forces. In September 1960, a system of neighborhood watch networks, known as Committees for the Defense of the Revolution (CDR), was created. In July 1961, two years after the 1959 Revolution, the Integrated Revolutionary Organizations (IRO) was formed, merging Fidel Castro's 26th of July Movement with Blas Roca's Popular Socialist Party and Faure Chomón's Revolutionary Directory 13 March. On 26 March 1962, the IRO became the United Party of the Cuban Socialist Revolution (PURSC), which, in turn, became the Communist Party on 3 October 1965, with Castro as First Secretary. In 1976 a national referendum ratified a new constitution, with 97.7% in favour.
The constitution secured the Communist Party's central role in governing Cuba, but kept party affiliation out of the election process. Other smaller parties exist but have little influence and are not permitted to campaign against the program of the Communist Party. Break with the United States Castro's resentment of American influence The United States recognized the Castro government on 7 January 1959, six days after Batista fled Cuba. President Dwight D. Eisenhower sent a new ambassador, Philip Bonsal, to replace Earl E. T. Smith, who had been close to Batista. The Eisenhower administration, in agreement with the American media and Congress, did this with the assumption that "Cuba [would] remain in the U.S. sphere of influence". Foreign-policy professor Piero Gleijeses argued that if Castro had accepted these parameters, he would be allowed to stay in power. Otherwise he would be overthrown. Among the opponents of Batista, many wanted to accommodate the United States. However, Castro belonged to a faction which opposed U.S. influence. Castro did not forgive the U.S. supply of arms to Batista during the revolution. On 5 June 1958, at the height of the revolution, he had written: "The Americans are going to pay dearly for what they are doing. When the war is over, I'll start a much longer and bigger war of my own: the war I'm going to fight against them. That will be my true destiny". (The United States had stopped supplies to Batista in March 1958, but left its Military Advisory Group in Cuba). Thus, Castro had no intention to bow to the United States. "Even though he did not have a clear blueprint of the Cuba he wanted to create, Castro dreamed of a sweeping revolution that would uproot his country's oppressive socioeconomic structure and of a Cuba that would be free of the United States". Breakdown of relations Only six months after Castro seized power, the Eisenhower administration began to plot his ouster. The United Kingdom was persuaded to cancel a sale of Hawker Hunter fighter aircraft to Cuba. The US National Security Council (NSC) met in March 1959 to consider means to institute a régime-change and the Central Intelligence Agency (CIA) began arming guerillas inside Cuba in May. In January 1960 Roy R. Rubottom, Jr., Assistant Secretary of State for Inter-American Affairs, summarized the evolution of Cuba–United States relations since January 1959: "The period from January to March might be characterized as the honeymoon period of the Castro government. In April a downward trend in US–Cuban relations had been evident… In June we had reached the decision that it was not possible to achieve our objectives with Castro in power and had agreed to undertake the program referred to by Undersecretary of State Livingston T. Merchant. On 31 October in agreement with the Central Intelligence Agency, the Department had recommended to the President approval of a program along the lines referred to by Mr. Merchant. The approved program authorized us to support elements in Cuba opposed to the Castro government while making Castro's downfall seem to be the result of his own mistakes."Braddock to SecState, Havana, 1 February 1960, FRUS 1958–60, 6:778. In March 1960 the French ship La Coubre blew up in Havana Harbor as it unloaded munitions, killing dozens. The CIA blamed the explosion on the Cuban government. 
Relations between the United States and Cuba deteriorated rapidly as the Cuban government, in reaction to the refusal of Royal Dutch Shell, Standard Oil and Texaco to refine petroleum from the Soviet Union in Cuban refineries under their control, took control of those refineries in July 1960. The Eisenhower administration promoted a boycott of Cuba by oil companies, to which Cuba responded by nationalizing the refineries in August 1960. Both sides continued to escalate the dispute. Cuba expropriated more US-owned properties, notably those belonging to the International Telephone and Telegraph Company (ITT) and to the United Fruit Company. In the Castro government's first agrarian reform law, on 17 May 1959, the state sought to limit the size of land holdings, and to distribute that land to small farmers in "Vital Minimum" tracts. This law served as a pretext for seizing lands held by foreigners and for redistributing them to Cuban citizens. Formal disconnection The United States severed diplomatic relations with Cuba on 3 January 1961, and further restricted trade in February 1962. The Organization of American States, under pressure from the United States, suspended Cuba's membership in the body on 22 January 1962, and the U.S. government banned all U.S.–Cuban trade on 7 February. The Kennedy administration extended this ban on 8 February 1963, forbidding U.S. citizens to travel to Cuba or to conduct financial or commercial transactions with the country. At first the embargo did not extend to other countries, and Cuba traded with most European, Asian and Latin American countries and especially with Canada. However, the United States later pressured other nations and American companies with foreign subsidiaries to restrict trade with Cuba. The Helms–Burton Act of 1996 makes it very difficult for foreign companies doing business with Cuba to also do business in the United States, forcing them to choose between the two marketplaces. Bay of Pigs invasion In April 1961, less than four months into the Kennedy administration, the Central Intelligence Agency (CIA) executed a plan that had been developed under the Eisenhower administration. This military campaign to topple Cuba's revolutionary government is now known as the Bay of Pigs Invasion (or La Batalla de Girón in Cuba). The aim of the invasion was to empower existing opposition militant groups to "overthrow the Communist regime" and establish "a new government with which the United States can live in peace." The invasion was carried out by a CIA-sponsored paramilitary group of over 1,400 Cuban exiles called Brigade 2506. Arriving in Cuba by boat from Guatemala on 15 April, the brigade landed on the beach Playa Girón and initially overwhelmed Cuba's counter-offensive. But by 20 April, the brigade surrendered and was publicly interrogated before being sent back to the US. Recently inaugurated president John F. Kennedy assumed full responsibility for the operation, even though he had vetoed the reinforcements requested during the battle. The invasion helped further build popular support for the new Cuban government. The Kennedy administration thereafter began Operation Mongoose, a covert CIA campaign of sabotage against Cuba, including the arming of militant groups, sabotage of Cuban infrastructure, and plots to assassinate Castro. All this reinforced Castro's distrust of the US, and set the stage for the Cuban Missile Crisis. 
The Cuban Missile Crisis

Tensions between the two governments peaked again during the October 1962 Cuban Missile Crisis. The United States had a much larger arsenal of long-range nuclear weapons than the Soviet Union, as well as medium-range ballistic missiles (MRBMs) in Turkey, whereas the Soviet Union had a large stockpile of medium-range nuclear weapons which were primarily located in Europe. Cuba agreed to let the Soviets secretly place SS-4 Sandal and SS-5 Skean MRBMs on its territory. Reports from inside Cuba to exile sources questioned the need for large amounts of ice going to rural areas, which led to the discovery of the missiles, confirmed by Lockheed U-2 reconnaissance photos. The United States responded by establishing a cordon in international waters to stop Soviet ships from bringing in more missiles (designated a quarantine rather than a blockade to avoid issues with international law). At the same time, Castro was becoming too extreme for Moscow's liking, so at the last moment the Soviets called back their ships. In addition, they agreed to remove the missiles already there in exchange for an agreement that the United States would not invade Cuba. Only after the fall of the Soviet Union was it revealed that another part of the agreement was the removal of U.S. missiles from Turkey. It also turned out that some submarines that the U.S. Navy blocked were carrying nuclear missiles and that communication with Moscow was tenuous, effectively leaving the decision of whether to fire the missiles at the discretion of the captains of those submarines. In addition, following the dissolution of the Soviet Union, the Russian government revealed that nuclear-armed FROGs (Free Rocket Over Ground) and Ilyushin Il-28 Beagle bombers had also been deployed in Cuba.

Military build-up

In the 1961 New Year's Day parade, the Communist administration exhibited Soviet tanks and other weapons. Cuban officers received extended military training in the Soviet Union, becoming proficient in the use of advanced Soviet weapons systems, including MiG jet fighters, submarines, sophisticated artillery, and other ground and air defense equipment. For most of the approximately 30 years of the Cuban-Soviet military collaboration, Moscow provided the Cuban Revolutionary Armed Forces—virtually free of charge—with nearly all of its equipment, training, and supplies, worth approximately $1 billion annually. By 1982, Cuba possessed the best-equipped and largest per capita armed forces in Latin America.

Suppression of dissent

Military Units to Aid Production, or UMAPs (Unidades Militares para la Ayuda de Producción), in effect forced-labor concentration camps, were established in 1965 as a way to eliminate alleged "bourgeois" and "counter-revolutionary" values in the Cuban population. In July 1968, the name "UMAP" was erased and paperwork associated with the UMAP was destroyed. The camps continued as "Military Units". By the 1970s, the standard of living in Cuba was "extremely spartan" and discontent was rife. Castro changed economic policies in the first half of the 1970s. In the 1970s unemployment reappeared as a problem. The response was to criminalize unemployment with the 1971 Anti-Loafing Law, under which the unemployed could be jailed. One alternative was to go fight Soviet-supported wars in Africa. In any given year, there were about 20,000 dissidents held and tortured under inhuman prison conditions.
Homosexuals were imprisoned in internment camps in the 1960s, where they were subject to medical-political "reeducation". The Black Book of Communism estimates that 15,000–17,000 people were executed. The anti-Castro Archivo Cuba estimates that 4,000 people were executed.

Emigration

The establishment of a socialist system in Cuba led many hundreds of thousands of upper- and middle-class Cubans to flee to the United States and other countries after Castro's rise to power. By 1961, thousands of Cubans had fled Cuba for the United States. On 22 March of that year, an exile council was formed. The council planned to defeat the Communist regime and form a provisional government with José Miró Cardona, a noted leader in the civil opposition against Batista, to serve as temporary president until elections could be held. Between 1959 and 1993, some 1.2 million Cubans left the island for the United States, often by sea in small boats and fragile rafts. Between 30,000 and 80,000 Cubans are estimated to have died trying to flee Cuba during this period. In the early years a number of those who could claim dual Spanish-Cuban citizenship left for Spain. Over the course of several decades, a number of Cuban Jews were allowed to emigrate to Israel after quiet negotiations; the majority of the 10,000 or so Jews who were in Cuba in 1959 eventually left the country. By the time of the collapse of the Soviet Union, Cubans were living in many different countries, some in member countries of the European Union. Spain, Italy, Mexico, and Canada have particularly large Cuban communities. On 6 November 1965, Cuba and the United States agreed to an airlift for Cubans who wanted to emigrate to the United States. The first of these so-called Freedom Flights left Cuba on 1 December 1965, and by 1971 over 250,000 Cubans had flown to the United States. In 1980 another 125,000 came to the United States during a six-month period in the Mariel boatlift, including some criminals and people with psychiatric diagnoses. It was discovered that the Cuban government was using the event to rid Cuba of the unwanted segments of its society. In 2012, Cuba abolished its requirement for exit permits, allowing Cuban citizens to travel to other countries more easily.

Involvement in Third World conflicts

From its inception, the Cuban Revolution defined itself as internationalist, seeking to spread its revolutionary ideals abroad and gain a variety of foreign allies. Although still a developing country itself, Cuba supported African, Latin American and Asian countries in the fields of military development, health and education. These "overseas adventures" not only irritated the United States but were also quite often a source of dispute with Cuba's ostensible allies in the Kremlin. The Sandinista insurgency in Nicaragua, which led to the demise of the Somoza dictatorship in 1979, was openly supported by Cuba. However, it was on the African continent that Cuba was most active, supporting a total of 17 liberation movements or leftist governments, in countries including Angola, Equatorial Guinea, Ethiopia, Guinea-Bissau, and Mozambique. Cuba offered to send troops to Vietnam, but the initiative was turned down by the Vietnamese. Cuba had some 39,000–40,000 military personnel abroad by the late 1970s, with the bulk of the forces in Sub-Saharan Africa but with some 1,365 stationed among Algeria, Iraq, Libya, and South Yemen.
Its Angolan involvement was particularly intense and noteworthy with heavy assistance given to the Marxist–Leninist MPLA in the Angolan Civil War. Cuban soldiers in Angola were instrumental in the defeat of South African and Zairian troops. Cuban soldiers also defeated the FNLA and UNITA armies and established MPLA control over most of Angola. Cuba's presence in Mozambique was more subdued, involving by the mid-1980s 700 Cuban military and 70 civilian personnel. In 1978, in Ethiopia, 16,000 Cuban combatants, along with the Soviet-supported Ethiopian Army, defeated an invasion force of Somalians. South African Defence Force soldiers were again drawn into the Angolan Civil War in 1987–88, and several inconclusive battles were fought between Cuban and South African forces. Cuban-piloted MiG-23s performed airstrikes against South African forces in South West Africa during the Battle of Cuito Cuanavale. Moscow used Cuban surrogate troops in Africa and the Middle East because they had a high level of training for combat in Third World environments, familiarity with Soviet weapons, physical toughness and a tradition of successful guerrilla warfare dating back to the uprisings against Spain in the 19th century. Cuban forces in Africa were mainly black and mulatto. An estimated 7,000–11,000 Cubans died in conflicts in Africa. Many Cuban soldiers died not as a result of hostile action but by accidents, friendly fire, or diseases such as malaria and yellow fever; many others died by suicide. Cuba was unable to pay on its own for the costs of its overseas military activities. After it lost its subsidies from the USSR, Cuba withdrew its troops from Ethiopia (1989), Nicaragua (1990), Angola (1991), and elsewhere. Angola Cuba's involvement in the Angolan Civil War began in the 1960s, when relations were established with the leftist Movement for the Popular Liberation of Angola (MPLA). The MPLA was one of three organisations struggling to gain Angola's independence from Portugal, the other two being UNITA and the National Liberation Front of Angola (FNLA). In August and October 1975, the South African Defence Force (SADF) intervened in Angola in support of the UNITA and FNLA (Operation Savannah). Cuban troops began to arrive in Angola in early October 1975. On 6 October, Cubans and the MPLA clashed with the FNLA and South African troops at Norton de Matos and were badly defeated. The Cubans blocked an advancing South African mechanized column on 4 November with 122mm rocket fire, causing the South Africans to request heavy artillery which could out-distance the rockets. Castro reacted to the presence of the South African armored column by announcing Operation Carlota, a massive resupply of Angola, on 5 November. An anti-Communist force made up of 1,500 FNLA fighters, 100 Portuguese mercenaries, and two battalions of the Zairian Army passed near the city of Quifangondo, only 30 km north of Luanda, at dawn on 10 November. The force, supported by South African Air Force aircraft and three 140 mm artillery pieces, marched in a single line along the Bengo River to face an 800-strong Cuban force across the river. Cuban and MPLA troops bombarded the FNLA with mortar and 122 mm rockets, destroying most of the FNLA's armored cars and 6 Jeeps carrying antitank rockets in the first hour of fighting. The Cuban-led force shot 2,000 rockets at the FNLA. Cubans then drove forward, launching RPG-7 rocket grenades, shooting with anti-aircraft guns, killing hundreds. 
The South Africans, with their aged World War II-era guns, were powerless to intervene, and subsequently retreated via Ambrizete to SAS President Steyn, a South African Navy frigate. The Cuba-MPLA victory at the Battle of Quifangondo largely ended the FNLA's importance in the conflict. On 25 November, as SADF armored cars and UNITA infantry tried to cross a bridge, Cubans hidden along the banks of the river attacked; as many as 90 South African and UNITA troops were killed or wounded, and 7 or 8 SADF armored cars were destroyed. The Cubans suffered no casualties. Between 9 and 12 December, Cuban and South African troops fought between Santa Comba and Quibala, in what became known as the "Battle of Bridge 14". The Cubans were severely defeated, losing 200 killed. The SADF suffered only 4 casualties. At the same time, UNITA troops and another South African mechanized unit captured Luso. Following these defeats, the number of Cuban troops airlifted to Angola more than doubled, from about 400 per week to perhaps 1,000. The Cuban forces mounted a counter-offensive beginning in January 1976 that impelled South African withdrawal by the end of March. South Africa spent the following decade launching bombing and strafing raids from its bases in South West Africa into southern Angola. In February 1976, Cuban forces launched Operation Pañuelo Blanco (White Handkerchief) against 700 FLEC irregulars operating in the Necuto area. The irregulars laid minefields which caused the Cubans some casualties as they pursued them into the jungle. Further skirmishing continued throughout the month. In early April, the irregulars were encircled and cut off from supplies. Nearly 100 FLEC irregulars were killed over two nights as they tried to break their encirclement; a further 100 irregulars died and 300 were taken prisoner when the Cubans moved in for the kill the next day. In 1987–88, South Africa again sent military forces to Angola to stop an advance of FAPLA forces (MPLA) against UNITA, leading to the Battle of Cuito Cuanavale, where the SADF was unable to defeat the FAPLA and Cuban forces. At the height of its operation, Cuba had as many as 50,000 soldiers stationed in Angola. On 22 December 1988, Angola, Cuba, and South Africa signed the Tripartite Accord in New York, arranging for the retreat of South African and Cuban troops within 30 months, and the implementation of the 10-year-old UN Security Council Resolution 435 for the independence of Namibia. The Cuban intervention, for a short time, turned Cuba into a "global player" in the midst of the Cold War. The Cubans' presence helped the MPLA retain control over large parts of Angola, and their military actions are also credited with helping secure Namibia's independence. The withdrawal of the Cubans ended 13 years of foreign military presence in Angola. At the same time, Cuba removed its troops from the Republic of the Congo and Ethiopia.

Guinea-Bissau

Some 40–50 Cubans fought against Portugal in Guinea-Bissau each year from 1966 until independence in 1974 (see Guinea-Bissau War of Independence). They helped in military planning and they were in charge of the artillery.

Algeria

As early as 1961, Cuba supported the National Liberation Front in Algeria against France. In October 1963, shortly after Algeria gained its independence, Morocco started a border dispute, and Cuba sent a battalion of 40 tanks and several hundred troops to help Algeria.
However, a truce between the two North African countries was signed within the week. A memorandum issued on 20 October 1963 by Raúl Castro mandated a high standard of behavior for the troops, with strict instructions given on their proper conduct during foreign interventions.

Congo

In 1964, Cuba supported the Simba Rebellion of adherents of Patrice Lumumba in Congo-Leopoldville (present-day Democratic Republic of the Congo). Among the insurgents was Laurent-Désiré Kabila, who would overthrow long-time dictator Mobutu 30 years later. However, the 1964 rebellion ended in failure. In the Mozambican Civil War and in Congo-Brazzaville (today the Republic of the Congo), Cubans acted as military advisors. Congo-Brazzaville furthermore acted as a supply base for the Angola mission.

Syria

In late 1973, there were 4,000 Cuban tank troops in Syria as part of an armored brigade which took part in the Yom Kippur War until May 1974. Cuba did not confirm any casualties.

Ethiopia

Fidel Castro was a supporter of the Marxist–Leninist dictator Mengistu Haile Mariam, whose regime killed hundreds of thousands during the Ethiopian Red Terror of the late 1970s, and who was later convicted of genocide and crimes against humanity. Cuba provided substantial military support to Mengistu during the latter's conflict with the Somali dictator Siad Barre in the Ogaden War (July 1977–March 1978), stationing around 24,000 troops in Ethiopia. Castro explained this to Erich Honecker, communist dictator of East Germany, by saying that Siad Barre was "above all a chauvinist". From October 1977 until January 1978, the Somali forces attempted to capture Harar during the Battle of Harar, where 40,000 Ethiopians had regrouped and re-armed with Soviet-supplied artillery and armor; backed by 1,500 Soviet advisors (34 of whom died in Ethiopia, 1977–90) and 16,000 Cuban troops, they engaged the attackers in vicious fighting. Though the Somali forces reached the city outskirts by November, they were too exhausted to take the city and eventually had to withdraw to await the Ethiopian counterattack. The expected Ethiopian-Cuban attack occurred in early February; however, it was accompanied by a second attack that the Somalis did not expect. A column of Ethiopian and Cuban troops crossed northeast into the highlands between Jijiga and the border with Somalia, bypassing the Somali force defending the Marda Pass. Mil Mi-6 helicopters heli-lifted Cuban BMD-1 and ASU-57 armored vehicles behind enemy lines. The attackers were thus able to assault from two directions in a "pincer" action, allowing the re-capture of Jijiga in only two days while killing 3,000 defenders. The Somali defense collapsed and every major Ethiopian town was recaptured in the following weeks. Recognizing that his position was untenable, Siad Barre ordered the Somali armed forces to retreat into Somalia on 9 March 1978. The execution of civilians and refugees and the rape of women by Ethiopian and Cuban troops were prevalent throughout the war. Assisted by Soviet advisors, the Cubans launched a second offensive in December 1979 directed at the population's means of survival, including the poisoning and destruction of wells and the killing of cattle herds.

Intelligence cooperation between Cuba and the Soviets

As early as September 1959, Valdim Kotchergin, a KGB agent, was seen in Cuba. Jorge Luis Vasquez, a Cuban who was imprisoned in East Germany, states that the East German Stasi trained the personnel of the Cuban Interior Ministry (MINIT).
The relationship between the KGB and the Cuban Intelligence Directorate (DI) was complex and marked by both times of close cooperation and times of extreme competition. The Soviet Union saw the new revolutionary government in Cuba as an excellent proxy agent in areas of the world where Soviet involvement was not popular on a local level. Nikolai Leonov, the KGB chief in Mexico City, was one of the first Soviet officials to recognize Fidel Castro's potential as a revolutionary, and urged the Soviet Union to strengthen ties with the new Cuban leader. The USSR saw Cuba as having far more appeal with new revolutionary movements, western intellectuals, and members of the New Left, given Cuba's perceived David and Goliath struggle against U.S. "imperialism". In 1963, shortly after the Cuban Missile Crisis, 1,500 DI agents, including Che Guevara, were invited to the USSR for intensive training in intelligence operations. Contemporary period (from 1991) Starting from the mid-1980s, Cuba experienced a crisis referred to as the "Special Period". When the Soviet Union, the country's primary source of trade, was dissolved in late 1991, a major supporter of Cuba's economy was lost, leaving it essentially paralyzed because of the economy's narrow basis, focused on just a few products with just a few buyers. National oil supplies, which were mostly imported, were severely reduced. Over 80% of Cuba's trade was lost and living conditions declined. A "Special Period in Peacetime" was declared, which included cutbacks on transport and electricity and even food rationing. In response, the United States tightened up its trade embargo, hoping it would lead to Castro's downfall. But the government tapped into a pre-revolutionary source of income and opened the country to tourism, entering into several joint ventures with foreign companies for hotel, agricultural and industrial projects. As a result, the use of U.S. dollars was legalized in 1994, with special stores being opened which only sold in dollars. There were two separate economies, dollar-economy and the peso-economy, creating a social split in the island because those in the dollar-economy made much more money (as in the tourist-industry). However, in October 2004, the Cuban government announced an end to this policy: from November U.S. dollars would no longer be legal tender in Cuba, but would instead be exchanged for convertible pesos (since April 2005 at the exchange rate of $1.08) with a 10% tax payable to the state on the exchange of U.S. dollars cash though not on other forms of exchange. A Canadian Medical Association Journal paper states that "The famine in Cuba during the Special Period was caused by political and economic factors similar to the ones that caused a famine in North Korea in the mid-1990s. Both countries were run by authoritarian regimes that denied ordinary people the food to which they were entitled when the public food distribution collapsed; priority was given to the elite classes and the military." The government did not accept American donations of food, medicines and money until 1993, forcing many Cubans to eat anything they could find. In the Havana zoo, the peacocks, the buffalo and even the rhea were reported to have disappeared during this period. Even domestic cats were reportedly eaten. Extreme food shortages and electrical blackouts led to a brief period of unrest, including numerous anti-government protests and widespread increases in urban crime. 
In response, the Cuban Communist Party formed hundreds of "rapid-action brigades" to confront protesters. The Communist Party's daily publication, Granma, stated that "delinquents and anti-social elements who try to create disorder and an atmosphere of mistrust and impunity in our society will receive a crushing reply from the people". In July 1994, 41 Cubans drowned attempting to flee the country aboard a tugboat; the Cuban government was later accused of sinking the vessel deliberately. Thousands of Cubans protested in Havana during the Maleconazo uprising on 5 August 1994. However, the regime's security forces swiftly dispersed them. A paper published in the Journal of Democracy states this was the closest that the Cuban opposition could come to asserting itself decisively. Continued isolation and regional engagement Although contacts between Cubans and foreign visitors were made legal in 1997, extensive censorship had isolated it from the rest of the world. In 1997, a group led by Vladimiro Roca, a decorated veteran of the Angolan war and the son of the founder of the Cuban Communist Party, sent a petition, entitled La Patria es de Todos ("the homeland belongs to all") to the Cuban general assembly, requesting democratic and human rights reforms. As a result, Roca and his three associates were sentenced to imprisonment, from which they were eventually released. In 2001, a group of Cuban activists collected thousands of signatures for the Varela Project, a petition requesting a referendum on the island's political process, which was openly supported by former U.S. President Jimmy Carter during his 2002 visit to Cuba. The petition gathered sufficient signatures to be considered by the Cuban government, but was rejected on an alleged technicality. Instead, a plebiscite was held in which it was formally proclaimed that Castro's brand of socialism would be perpetual. In 2003, Castro cracked down on independent journalists and other dissidents in an episode which became known as the "Black Spring". The government imprisoned 75 dissident thinkers, including 29 journalists, librarians, human rights activists, and democracy activists, on the basis that they were acting as agents of the United States by accepting aid from the U.S. government. Though it was largely diplomatically isolated from the West at this time, Cuba nonetheless cultivated regional allies. After the rise to power of Hugo Chávez in Venezuela in 1999, Cuba and Venezuela formed an increasingly close relationship based on their shared leftist ideologies, trade links and mutual opposition to U.S. influence in Latin America. Additionally, Cuba continued its post-revolution practice of dispatching doctors to assist poorer countries in Africa and Latin America, with over 30,000 health workers deployed overseas by 2007. End of Fidel Castro's presidency In 2006, Fidel Castro fell ill and withdrew from public life. The following year, Raúl Castro became Acting President, replacing his brother as the de facto leader of the country. In a letter dated 18 February 2008, Fidel Castro announced his formal resignation at the 2008 National Assembly meetings, saying "I will not aspire nor accept—I repeat I will not aspire or accept—the post of President of the Council of State and Commander in Chief." In the autumn of 2008, Cuba was struck by three separate hurricanes, in the most destructive hurricane season in the country's history; over 200,000 were left homeless, and over US$5 billion of property damage was caused. 
In March 2012, the retired Fidel Castro met Pope Benedict XVI during the latter's visit to Cuba; the two men discussed the role of the Catholic Church in Cuba, which has a large Catholic community. Improving foreign relations In July 2012, Cuba received its first American goods shipment in over 50 years, following the partial relaxation of the U.S. embargo to permit humanitarian shipments. In October 2012, Cuba announced the abolition of its much-disliked exit permit system, allowing its citizens more freedom to travel abroad. In February 2013, after his reelection as president, Raúl Castro stated that he would retire from government in 2018 as part of a broader leadership transition. In July 2013, Cuba became embroiled in a diplomatic scandal after Chong Chon Gang, a North Korean ship illegally carrying Cuban weapons, was impounded by Panama. Cuba and Venezuela maintained their alliance after Hugo Chávez's death in March 2013, but the severe economic strife suffered by Venezuela in the mid-2010s lessened its ability to support Cuba, and may ultimately have contributed to the thawing of Cuban-American relations. In December 2014, after a highly publicized exchange of political prisoners between the United States and Cuba, U.S. President Barack Obama announced plans to re-establish diplomatic relations with Cuba after over five decades of severance. He stated that the U.S. government intended to establish an embassy in Havana and improve economic ties with the country. Obama's proposal received both strong criticism and praise from different elements of the Cuban American community. In April 2015, the U.S. government announced that Cuba would be removed from its list of state sponsors of terrorism, on which it had been included since 1982. The U.S. embassy in Havana was formally reopened in August 2015. In 2017, staffing levels at the embassy were reduced following |
to do it and, second, that they exploited the rest of the economy by receiving large amounts of resources . . . while there are factories that could have improved with a better distribution of those resources that were allocated to the Ten-Million-Ton plan”. During the Revolutionary period, Cuba was one of the few developing countries to provide foreign aid to other countries. Foreign aid began with the construction of six hospitals in Peru in the early 1970s. It expanded later in the 1970s to the point where some 8000 Cubans worked in overseas assignments. Cubans built housing, roads, airports, schools and other facilities in Angola, Ethiopia, Laos, Guinea, Tanzania and other countries. By the end of 1985, 35,000 Cuban workers had helped build projects in some 20 Asian, African and Latin American countries. For Nicaragua in 1982, Cuba pledged to provide over $130 million worth of agricultural and machinery equipment, as well as some 4000 technicians, doctors and teachers. In 1986, Cuba defaulted on its $10.9 billion debt to the Paris Club. In 1987, Cuba stopped making payments on that debt. In 2002, Cuba defaulted on $750 million in Japanese loans. Special Period The Cuban gross domestic product declined at least 35% between 1989 and 1993 due to the loss of 80% of its trading partners and Soviet subsidies. This loss of subsidies coincided with a collapse in world sugar prices. Sugar had done well from 1985 to 1990 and crashed precipitously in 1990–91 and did not recover for five years. Cuba had been insulated from world sugar prices by Soviet price guarantees. However, the Cuban economy began to improve once again following a rapid improvement in trade and diplomatic relations between Cuba and Venezuela following the election of Hugo Chávez in Venezuela in 1998, who became Cuba's most important trading partner and diplomacy ally. This era was referred to as the "Special Period in Peacetime" later shortened to "Special Period". A Canadian Medical Association Journal paper claimed, "The famine in Cuba during the Special Period was caused by political and economic factors similar to the ones that caused a famine in North Korea in the mid-1990s, on the grounds that both countries were run by authoritarian regimes that denied ordinary people the food to which they were entitled to when the public food distribution collapsed and priority was given to the elite classes and the military." Other reports painted an equally dismal picture, describing Cubans having to resort to eating anything they could find, from Havana Zoo animals to domestic cats. But although the collapse of centrally planned economies in the Soviet Union and other countries of the Eastern bloc subjected Cuba to severe economic difficulties, which led to a drop in calories per day from 3052 in 1989 to 2600 in 2006, mortality rates were not strongly affected thanks to the priority given on maintaining a social safety net. The government undertook several reforms to stem excess liquidity, increase labor incentives and alleviate serious shortages of food, consumer goods and services. To alleviate the economic crisis, the government introduced a few market-oriented reforms including opening to tourism, allowing foreign investment, legalizing the U.S. dollar and authorizing self-employment for some 150 occupations. (This policy was later partially reversed so that while the U.S. dollar is no longer accepted in businesses, it remains legal for Cubans to hold the currency.) These measures resulted in modest economic growth. 
The liberalized agricultural markets introduced in October 1994, at which state and private farmers sold above-quota production at free market prices, broadened legal consumption alternatives and reduced black market prices. Government efforts to lower subsidies to unprofitable enterprises and to shrink the money supply caused the semi-official exchange rate for the Cuban peso to move from a peak of 120 to the dollar in the summer of 1994 to 21 to the dollar by year-end 1999. The drop in GDP apparently halted in 1994, when Cuba reported 0.7% growth, followed by increases of 2.5% in 1995 and 7.8% in 1996. Growth slowed again in 1997 and 1998 to 2.5% and 1.2% respectively. One of the key reasons given was the failure to notice that sugar production had become uneconomic. Reflecting on the Special Period, Cuban president Fidel Castro later admitted that many mistakes had been made: "The country had many economists and it is not my intention to criticize them, but I would like to ask why we hadn't discovered earlier that maintaining our levels of sugar production would be impossible. The Soviet Union had collapsed, oil was costing $40 a barrel, sugar prices were at basement levels, so why did we not rationalize the industry?" Living conditions in 1999 remained well below the 1989 level.

Recovery

Due to the continued growth of tourism, recovery began in 1999 with a 6.2% increase in GDP. Growth then picked up, with GDP rising by 11.8% in 2005 according to government figures. In 2007 the Cuban economy grew by 7.5%, higher than the Latin American average. Accordingly, the cumulative growth in GDP since 2004 stood at 42.5%. However, from 1996, the State started to impose income taxes on self-employed Cubans. Cuba ranked third in the region in 1958 in GDP per capita, surpassed only by Venezuela and Uruguay. It had descended to 9th, 11th or 12th place in the region by 2007. Cuban social indicators suffered less. Every year the United Nations holds a vote asking countries whether the United States is justified in its economic embargo against Cuba and whether it should be lifted. 2016 was the first year that the United States abstained from the vote rather than voting no; "since 1992 the US and Israel have constantly voted against the resolution – occasionally supported by the Marshall Islands, Palau, Uzbekistan, Albania and Romania".

Post-Fidel Castro reforms

In 2011, "[t]he new economic reforms were introduced, effectively creating a new economic system", which the Brookings Institution dubbed the "New Cuban Economy". Since then, over 400,000 Cubans have signed up to become entrepreneurs. The government listed 181 official jobs no longer under its control, such as taxi driver, construction worker and shopkeeper. Workers must purchase licenses to work in some roles, such as mule driver, palm-tree trimmer, or well-digger. Despite these openings, Cuba maintains nationalized companies for the distribution of all essential amenities (water, power, ...) and other essential services to ensure a healthy population (education, health-care). Around 2000, half the country's sugar mills closed. Prior to the reforms, imports were double exports, doctors earned £15 per month and families supplemented incomes with extra jobs. After the reforms, more than 150,000 farmers could lease land from the government for surplus crop production.
Before the reforms, the only real-estate transactions involved home-owners swapping properties; the reforms legalized the buying and selling of real estate and created a real-estate boom in the country. In 2012 a Havana fast-food burger/pizza restaurant, La Pachanga, started in the owner's home; it served 1,000 meals on a Saturday at £3 each. Tourists can now ride factory steam-locomotives through closed sugar mills. In 2008, Raúl Castro's administration hinted that the purchase of computers, DVD players, and microwaves would become legal; however, monthly wages remain less than 20 U.S. dollars. Mobile phones, which had been restricted to Cubans working for foreign companies and government officials, were legalized in 2008. In 2010 Fidel Castro, in agreement with Raúl Castro's reformist sentiment, admitted that the Cuban model, based on the old Soviet model of centralized planning, was no longer sustainable. The brothers encouraged the development of a co-operative variant of socialism, in which the state plays a less active role in the economy, and the formation of worker-owned co-operatives and self-employment enterprises. To remedy Cuba's economic structural distortions and inefficiencies, the Sixth Congress approved expansion of the internal market and access to global markets on April 18, 2011. A comprehensive list of changes is:

expenditure adjustments (education, healthcare, sports, culture)
change in the structure of employment; reducing inflated payrolls and increasing work in the non-state sector
legalizing of 201 different personal business licenses
fallow state land in usufruct leased to residents
incentives for non-state employment, as a re-launch of self-employment
proposals for the formation of non-agricultural cooperatives
legalization of the sale and private ownership of homes and cars
greater autonomy for state firms
search for food self-sufficiency, the gradual elimination of universal rationing and change to targeting the poorest population
possibility to rent state-run enterprises (including state restaurants) to self-employed persons
separation of state and business functions
tax-policy update
easier travel for Cubans
strategies for external debt restructuring

On December 20, 2011, a new credit policy allowed Cuban banks to finance entrepreneurs and individuals wishing to make major purchases or home improvements, in addition to farmers. "Cuban banks have long provided loans to farm cooperatives, they have offered credit to new recipients of farmland in usufruct since 2008 and in 2011 they began making loans to individuals for business and other purposes". The system of rationed food distribution in Cuba was known as the Libreta de Abastecimiento ("Supplies booklet"). Ration books at bodegas still procured rice, oil, sugar and matches, over and above the average government wage of £15 monthly. Raul Castro signed Law 313 in September 2013 in order to set up a special economic zone, the first in the country, in the port city of Mariel. On 22 October 2013 the government announced its intention to end the dual-currency system eventually. The CUC ceased circulation on 1 January 2021. The achievements of the radical social policy of socialist Cuba, which enabled social advancement for the formerly underprivileged classes, were curbed by the economic crisis and the low wages of recent decades. The socialist leadership is reluctant to tackle this problem because it touches a core aspect of its revolutionary legitimacy.
As a result, Cuba's National Bureau of Statistics (ONE) publishes little data on the growing socio-economic divide. A nationwide scientific survey shows that social inequalities have become increasingly visible in everyday life and that the Afro-Cuban population is structurally disadvantaged. The report notes that while 58 percent of white Cubans have incomes of less than $3,000 a year, among Afro-Cubans that proportion reaches 95 percent. Afro-Cubans, moreover, receive a very limited portion of family remittances from the Cuban-American community in South Florida, which is mostly white. Remittances from family members abroad often serve as starting capital for the emerging private sector, and the most lucrative branches of business, such as restaurants and lodgings, are run predominantly by white Cubans. In February 2019 Cuban voters approved a new constitution granting the right to private property and greater access to free markets, while also maintaining Cuba's status as a socialist state. In June 2019, the 16th ExpoCaribe trade fair took place in Santiago. Since 2014, the Cuban economy has seen a dramatic uptick in foreign investment. In November 2019, Cuba's state newspaper, Granma, published an article acknowledging that despite the deterioration in relations between the U.S. and Cuban governments, the Cuban government continued to make efforts to attract foreign investment in 2018. In December 2018, 525 Foreign Direct Investment projects were reported in Cuba, a dramatic increase from the 246 projects reported in 2014. In February 2021 the Cuban Cabinet authorised private initiative in more than 1800 occupations.

International debt negotiations

Raúl Castro's government began a concerted effort to restructure, and to seek forgiveness of, debts owed to creditor countries, many of them in the billions of dollars and long in arrears, incurred under Fidel Castro in the 1970s and 1980s. In 2011 China forgave $6 billion in debt owed to it by Cuba. In 2013, Mexico's Finance Minister Luis Videgaray announced that a loan issued to Cuba more than 15 years earlier by Mexico's foreign trade development bank, Bancomext, was worth $487 million. The governments agreed to "waive" 70% of it, approximately $340.9 million, and Cuba would repay the remaining $146.1 million over ten years. In 2014, before making a diplomatic visit to Cuba, Russian President Vladimir Putin forgave over 90% of the debt owed to Russia by Cuba. The forgiveness totaled $32 billion; a remaining $3.2 billion would be paid over ten years. In 2015 Cuba entered into negotiations over its $11.1 billion debt to 14 members of the Paris Club. In December 2015, the parties announced an agreement: Paris Club nations agreed to forgive $8.5 billion of the $11.1 billion total debt, mostly by waiving interest, service charges and penalties accrued over more than two decades of non-payment. The 14 countries party to the agreement were: Austria, Australia, Belgium, Canada, Denmark, Finland, France, Italy, Japan, Spain, Sweden, Switzerland, the Netherlands, and the United Kingdom. The remaining $2.6 billion would be paid over 18 years, with annual payments due by 31 October of each year. The payments would phase in gradually, rising from an initial 1.6 percent of the total owed to a final payment of 8.9 percent in 2033. Interest would be forgiven for 2015–2020 and thereafter would be just 1.5 percent of the total debt still due.
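To make the scale of these restructured payments concrete, the sketch below lays out an illustrative repayment schedule. It assumes, purely for illustration, that "the total owed" refers to the remaining $2.6 billion and that the annual share rises linearly from 1.6% in 2016 to 8.9% in 2033; the actual year-by-year percentages were negotiated by the parties and are not given in the text.

```python
# Illustrative sketch only: the real schedule was negotiated year by year and
# is not reproduced here. Assumes a linear ramp of the annual share from
# 1.6% (first payment, 2016) to 8.9% (final payment, 2033).
REMAINING_DEBT = 2.6e9          # USD left after the $8.5 billion write-off
FIRST_SHARE, LAST_SHARE = 0.016, 0.089
YEARS = 18                      # annual instalments, each due by 31 October

for i in range(YEARS):
    share = FIRST_SHARE + (LAST_SHARE - FIRST_SHARE) * i / (YEARS - 1)
    payment = share * REMAINING_DEBT
    print(f"{2016 + i}: {share:.1%} of total = ${payment / 1e6:,.0f} million")
```

Under these assumptions the first instalment comes to roughly $42 million and the last to roughly $231 million. A straight linear ramp between the two stated endpoints does not sum to exactly 100 percent of the principal, which is one reason the published schedule must differ in its details.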
The agreement contained a penalty clause: should Cuba again fail to make payments on schedule (by 31 October of any year), it would be charged 9 percent interest until payment, as well as late interest on the portion in arrears. The agreement was viewed favorably by the government as a way of resolving long-standing issues, building business confidence, increasing foreign direct investment, and serving as a preliminary step toward gaining access to credit lines in Europe. In 2019 Cuba once again defaulted on its Paris Club debt. Of the estimated $80 million payment due in 2019, Cuba made only a partial payment, leaving $30 million owed for that year. Cuban Deputy Prime Minister Ricardo Cabrisas wrote a letter to Odile Renaud-Basso, President of the Paris Club, noting that Cuba was aware that "circumstances dictated that we were not able to honour our commitments with certain creditor countries as agreed in the multilateral Minute signed by the parties in December 2015". He maintained that they had "the intention of settling" the payments in arrears by 31 May 2020. In May 2020, with payments still not made, Deputy PM Cabrisas sent a letter to the fourteen Paris Club countries in the agreement requesting "a moratorium (of payments) for 2019, 2020 and 2021 and a return to paying in 2022".

Sectors

Energy production

As of 2011, 96% of electricity was produced from fossil fuels. Solar panels were introduced in some rural areas to reduce blackouts, brownouts and the use of kerosene. Citizens were encouraged to replace inefficient lamps with newer models to reduce consumption, and a power tariff discouraged inefficient use. As of August 2012, off-shore petroleum exploration of promising formations in the Gulf of Mexico had been unproductive, with two failures reported; additional exploration is planned. In 2007, Cuba produced an estimated 16.89 billion kWh of electricity and consumed 13.93 billion kWh, with no exports or imports. According to a 1998 estimate, 89.52% of its energy production came from fossil fuels, 0.65% from hydroelectricity and 9.83% from other sources. According to 2007 and 2008 estimates, the country produced 62,100 barrels of oil per day, consumed 176,000 bbl/d, imported 104,800 bbl/d, and held 197,300,000 bbl of proved oil reserves. Venezuela is Cuba's primary source of oil. In 2017, Cuba produced and consumed an estimated 1189 million m3 of natural gas, with no exports or imports and 70.79 billion m3 of proved reserves.

Energy sector

The Energy Revolution is a program launched by Cuba in 2006, aimed at improving the country's socio-economic situation and transitioning Cuba into an energy-efficient economy with diverse energy sources. Cuba's energy sector lacks the resources to produce optimal amounts of power. One of the issues the program faces is that Cuba's power production suffers from a lack of investment and from the ongoing trade sanctions imposed by the United States. The sector has nonetheless received multimillion-dollar investment distributed across a network of power resources. However, customers experience rolling blackouts imposed by the power companies in order to conserve electricity during Cuba's economic crisis. An outdated electricity grid, damaged by severe hurricanes, caused the energy crisis of 2004 and remained a major issue during the Energy Revolution. Cuba responded by deploying a variety of energy resources.
Under the program, 6,000 small diesel generators, 416 fuel-oil generators, 893 diesel generators, 9.4 million energy-saving lamps to replace incandescent bulbs, 1.33 million fans, 5.5 million electric pressure cookers, 3.4 million electric rice cookers, 200,000 electric water pumps, 2.04 million domestic refrigerators and 100,000 televisions were distributed among the territories. By 2009 the electrical grid had been restored to only 90%. Alternative energy has emerged as a major priority, as the government has promoted wind and solar power. The crucial challenge for the Energy Revolution program is to develop sustainable energy in Cuba while taking into account a country that is still developing, an economic embargo and the detrimental effects of the hurricanes that strike the island.

Agriculture

Cuba produces sugarcane, tobacco, citrus, coffee, rice, potatoes, beans and livestock. As of 2015, Cuba imported about 70–80% of its food and 80–84% of the food it rations to the public. Raúl Castro ridiculed the bureaucracy that shackled the agriculture sector.

Industry

In 1996, industrial production accounted for almost 37% of Cuban GDP, or US$6.9 billion, and employed 24% of the population, or 2,671,000 people. A rally in sugar prices in 2009 stimulated investment in and development of sugar processing. In 2003 Cuba's biotechnology and pharmaceutical industry was gaining in importance. Among the products sold internationally are vaccines against various viral and bacterial pathogens. For example, the drug Heberprot-P was developed as a treatment for diabetic foot ulcers and has had success in many developing countries. Cuba has also done pioneering work on the development of drugs for cancer treatment. Scientists such as V. Verez-Bencomo were awarded international prizes for their contributions in biotechnology and sugar cane.

Services

Tourism

In the mid-1990s tourism surpassed sugar, long the mainstay of the Cuban economy, as the primary source of foreign exchange. Havana devotes significant resources to building tourist facilities and renovating historic structures.

Cuba's economy had grown rapidly in the early part of the twentieth century, fueled by the sale of sugar to the United States. Prior to the Cuban Revolution, in 1958, Cuba had a per-capita GDP of $2,363, which placed it in the middle of Latin American countries. According to the UN, between 1950 and 1955 Cuba had a life expectancy of 59.4 years, which ranked 56th in the world. Its proximity to the United States made it a familiar holiday destination for wealthy Americans, whose visits for gambling, horse racing and golfing made tourism an important economic sector. The tourism magazine Cabaret Quarterly described Havana as "a mistress of pleasure, the lush and opulent goddess of delights." Cuban dictator Fulgencio Batista had plans to line the Malecón, Havana's famous walkway by the water, with hotels and casinos to attract even more tourists. Today the Hotel Havana Riviera is the only hotel that was built before the revolutionary government took control.

Cuban Revolution

On March 3, 1959, Fidel Castro seized control of the Cuban Telephone Company, which was a subsidiary of the International Telephone and Telecommunications Corporation. This was the first of many nationalizations made by the new government; the assets seized totaled US$9 billion. After the 1959 Revolution, citizens were not required to pay a personal income tax (their salaries being regarded as net of any taxes).
The government also began to subsidize healthcare and education for all citizens; this action created strong national support for the new revolutionary government. After the USSR and Cuba reestablished diplomatic relations in May 1960, the USSR began to buy Cuban sugar in exchange for oil. When oil companies such as Shell, Texaco and Esso refused to refine Soviet oil, Castro nationalized that industry as well, taking over the refineries on the island. Days later, in response, the United States cut the Cuban sugar quota completely; Eisenhower was quoted saying "This action amounts to economic sanctions against Cuba. Now we must look ahead to other moves — economic, diplomatic, and strategic." On February 7, 1962, Kennedy expanded the United States' embargo to cover almost all U.S. imports.

In 1970, Fidel Castro attempted to motivate the Cuban people to harvest 10 million tons of sugar, a drive known in Spanish as La Zafra (the sugar harvest), in order to increase exports and grow the economy. With the help of the majority of the Cuban population, the country was able to produce 7.56 million tons of sugar. In July 1970, after the harvest was over, Castro took responsibility for the failure, and later that same year he blamed the Sugar Industry Minister, saying: "Those technocrats, geniuses, super-scientists assured me that they knew what to do in order to produce the ten million tons. But it was proven, first, that they did not know how to do it and, second, that they exploited the rest of the economy by receiving large amounts of resources ... while there are factories that could have improved with a better distribution of those resources that were allocated to the Ten-Million-Ton plan".

During the Revolutionary period, Cuba was one of the few developing countries to provide foreign aid to other countries. Foreign aid began with the construction of six hospitals in Peru in the early 1970s and expanded later in the decade to the point where some 8,000 Cubans worked in overseas assignments. Cubans built housing, roads, airports, schools and other facilities in Angola, Ethiopia, Laos, Guinea, Tanzania and other countries. By the end of 1985, 35,000 Cuban workers had helped build projects in some 20 Asian, African and Latin American countries. In 1982, Cuba pledged to provide Nicaragua with over $130 million worth of agricultural equipment and machinery, as well as some 4,000 technicians, doctors and teachers. In 1986, Cuba defaulted on its $10.9 billion debt to the Paris Club, and in 1987 it stopped making payments on that debt. In 2002, Cuba defaulted on $750 million in Japanese loans.

Special Period

The Cuban gross domestic product declined at least 35% between 1989 and 1993 due to the loss of 80% of its trading partners and of Soviet subsidies. This loss of subsidies coincided with a collapse in world sugar prices: sugar had done well from 1985 to 1990, crashed precipitously in 1990–91 and did not recover for five years, and Cuba had previously been insulated from world sugar prices by Soviet price guarantees. The Cuban economy later began to improve following a rapid expansion of trade and diplomatic relations with Venezuela after the 1998 election of Hugo Chávez, whose country became Cuba's most important trading partner and diplomatic ally. This era was referred to as the "Special Period in Peacetime", later shortened to the "Special Period".
A Canadian Medical Association Journal paper claimed, "The famine in Cuba during the Special Period was caused by political and economic factors similar to the ones that caused a famine in North Korea in the mid-1990s, on the grounds that both countries were run by authoritarian regimes that denied ordinary people the food to which they were entitled to when the public food distribution collapsed and priority was given to the elite classes and the military." Other reports painted an equally dismal picture, describing Cubans having to resort to eating anything they could find, from Havana Zoo animals to domestic cats. Although the collapse of the centrally planned economies of the Soviet Union and the other countries of the Eastern bloc subjected Cuba to severe economic difficulties, which led to a drop in calories consumed per day from 3052 in 1989 to 2600 in 2006, mortality rates were not strongly affected, thanks to the priority given to maintaining a social safety net. The government undertook several reforms to stem excess liquidity, increase labor incentives and alleviate serious shortages of food, consumer goods and services. To alleviate the economic crisis, the government introduced a few market-oriented reforms, including opening to tourism, allowing foreign investment, legalizing the U.S. dollar and authorizing self-employment for some 150 occupations. (This policy was later partially reversed, so that while the U.S. dollar is no longer accepted in businesses, it remains legal for Cubans to hold the currency.) These measures resulted in modest economic growth.
Urban buses

In Havana, urban transportation used to be provided by a colorful selection of buses imported from the Soviet Union or Canada. Many of these vehicles were second-hand, such as the 1,500 decommissioned Dutch buses that the Netherlands donated to Cuba in the mid-1990s, as well as GM "fishbowl" buses from Montreal. Despite the United States trade embargo, American-style yellow school buses (imported second-hand from Canada) are also increasingly common sights. Since 2008, service on seven key lines in and out of the city has been provided by Chinese Zhengzhou Yutong buses. These replaced the famous camellos ("camels" or "dromedaries", after their "humps"), trailer buses that hauled as many as two hundred passengers in a passenger-carrying trailer. After the upgrading of Seville's public bus fleet to CNG-powered vehicles, many of the decommissioned ones were donated to the city of Havana. These bright orange buses still display the name of Transportes Urbanos de Sevilla, S.A.M., their former owner, and Seville's coat of arms as a sign of gratitude. As of 2016, urban transport in Havana consists entirely of modern Yutong diesel buses; the Seville and Ikarus buses are gone.

Automobiles

Since 2009, Cuba has imported sedans from Chinese automaker Geely to serve as police cars, taxis and rental vehicles. Previously, the Soviet Union supplied Volgas, Moskvichs and Ladas, as well as heavy trucks like the ZIL and the KrAZ, and Cuba also bought cars from European and Asian companies. In 2004, it was estimated that there were some 173,000 cars in Cuba.

Old American cars in Cuba

Most new vehicles came to Cuba from the United States until the 1960 United States embargo against Cuba ended importation of both cars and their parts. As many as 60,000 American vehicles are in use, nearly all in private hands. Of Cuba's vintage American cars, many have been modified with newer engines, disc brakes and other parts, often scavenged from Soviet cars, and most bear the marks of decades of use. Pre-1960 vehicles remain the property of their original owners and descendants, and can be sold to other Cubans provided the proper traspaso certificate is in place. However, the old American cars on the road today have "relatively high inefficiencies" due in large part to the lack of modern technology, which increases fuel consumption and adds to the economic plight of their owners. Partly because of these inefficiencies, a noticeable drop in travel occurred, from an "average of nearly 3000 km/year in the mid-1980s to less than 800 km/year in 2000–2001". Because Cubans try to save as much money as possible, when they do travel the cars are usually loaded past the maximum allowable weight and driven on decaying roads, resulting in even more abuse of the already under-maintained vehicles.

Hitchhiking and carpooling

As a result of the "Special Period" in 1991 (a period of food and energy shortages caused by the loss of the Soviet Union as a trading partner), hitchhiking and carpooling became important parts of Cuba's transportation system and society in general. In 1999, an article in Time magazine claimed, "In Cuba[...] hitchhiking is custom. Hitchhiking is essential. Hitchhiking is what makes Cuba move."

Changes in the 2000s

For many years, Cubans could only acquire new cars with special permission. In 2011, the Cuban government legalized the purchase and sale of used post-1959 autos. In December 2013, Cubans were allowed to buy new cars from state-run dealerships, which had not previously been permitted. In 2020, this was further extended, with cars being sold in convertible currencies.

Waterways
Cauto River; Sagua la Grande River

Ports and harbors
Cienfuegos, Havana, Manzanillo, Mariel, Matanzas, Nuevitas, Santiago de Cuba

Merchant marine
Total: 3 ships (one cargo ship, one passenger ship, one refrigerated cargo ship); registered in other countries: 5

Airlines

Besides the state-owned airline Cubana (Cubana de Aviación), the two other major Cuban airlines are Aero Caribbean and Aerogaviota, both of which operate modern European and Russian aircraft. One other airline is Aerotaxi.

Airports
Total: 133
With paved runways: 64 (over 3,047 m: 7; 2,438 to 3,047 m: …)
Cuban support for Salvador Allende was seen by those on the political right as proof of their view that "The Chilean Way to Socialism" was an effort to put Chile on the same path as Cuba.

Intervention in Cold War conflicts

Africa was the target for Cuba's entry into a leadership role in world affairs. It was chosen in part to represent Cuban solidarity with its own large element of African descent. More important, it made Cuban revolutionary traditions a worldwide model, and the more often that model was followed, the stronger Cuba would be in terms of prestige and untouchability. Wolf Grabendorff says, "Most African states view Cuban intervention in Africa as help in achieving independence through self-help rather than as a step toward the type of dependence which would result from a similar commitment by the super-powers." Starting in the 1970s, Cuba's intervention in Africa targeted 17 different nations and three insurgencies, and soon led to Cuban soldiers engaging in frontline military combat. In doing so, Castro aligned Cuba with African insurgencies against colonial vestiges and specifically against South Africa. Furthermore, by providing military aid Cuba won trading partners for the Soviet bloc and potential converts to Marxism. In the 1970s, Cuba expanded military aid programs to Africa and the Middle East, sending military missions to Sierra Leone in 1972, South Yemen in 1973, Equatorial Guinea in 1973, and Somalia in 1974. It sent combat troops to Syria in 1973 to fight against Israel. Cuba was following the general Soviet policy of détente with the West, and secret discussions were opened with the United States about peaceful coexistence. They ended abruptly when Cuba sent combat troops to fight in Angola.

Intervention in Angola

On November 4, 1975, Castro ordered the deployment of Cuban troops to Angola to aid the Marxist MPLA against UNITA forces, which were being supported by the People's Republic of China and later by the United States, Israel and South Africa (see: Cuba in Angola). After the Cubans had operated on their own for two months, Moscow aided the mission, with the USSR mounting a massive airlift of Cuban forces into Angola. On this, Nelson Mandela is said to have remarked, "Cuban internationalists have done so much for African independence, freedom, and justice." Cuban troops were also sent to Marxist Ethiopia to assist Ethiopian forces in the Ogaden War with Somalia in 1977. Cuba sent troops along with the Soviet Union to aid the FRELIMO and MPLA governments in Mozambique and Angola, respectively, while they were fighting the U.S. and South African-backed insurgent groups RENAMO (supported by Rhodesia as well) and UNITA. Castro also aided the government of Mengistu Haile Mariam in Ethiopia during its conflict with Somalia. Castro never disclosed the number of Cuban casualties in these African wars, but one estimate is 14,000, a high number for the small country.

Intervention in Latin America

In addition, Castro extended support to Marxist revolutionary movements throughout Latin America, such as aiding the Sandinistas in overthrowing the Somoza government in Nicaragua in 1979. It has been claimed by the Carthage Foundation-funded Center for a Free Cuba that an estimated 14,000 Cubans were killed in Cuban military actions abroad.

Leadership of the non-aligned movement

In the 1970s, Cuba made a major effort to assume a leadership role in the world's Non-Aligned Movement, which represented over 90 Third World nations. Its combat troops in Angola greatly impressed fellow non-aligned nations. Cuba also established military advisory missions and economic and social reform programs. Apart from interventions in revolutionary conflicts and civil wars, Cuba made worldwide commitments to social and economic programs in 40 poor countries. This was made possible by the improved Cuban economy in the 1970s.
The largest programs involved major construction projects, in which 8,000 Cubans provided technical advice, planning, and training of engineers. Educational programs involved 3,500 teachers. In addition, thousands of specialists, technicians and engineers were sent as advisors to the agricultural, mining and transportation sectors around the globe. Cuba hosted 10,000 foreign students, chiefly from Africa and Latin America, in health programs and technical schools. Cuba's extensive program of medical support attracted international attention. A 2007 study reported: Since the early 1960s, 28,422 Cuban health workers have worked in 37 Latin American countries, 31,181 in 33 African countries, and 7,986 in 24 Asian countries. Throughout a period of four decades, Cuba sent 67,000 health workers to structural cooperation programs, usually for at least two years, in 94 countries ... an average of 3,350 health workers working abroad every year between 1960 and 2000.

The 1976 world conference of the Nonaligned Movement applauded Cuban internationalism, "which assisted the people of Angola in frustrating the expansionist and colonialist strategy of South Africa's racist regime and its allies." The next nonaligned conference was scheduled for Havana in 1979, to be chaired by Castro, making him the de facto spokesman for the Movement. The conference in September 1979 marked the zenith of Cuban prestige. The nonaligned nations believed that Cuba was not aligned with the Soviet camp in the Cold War. However, in December 1979, the Soviet Union invaded Afghanistan, an active member of the Nonaligned Movement. At the United Nations, Nonaligned members voted 56 to 9, with 26 abstaining, to condemn the Soviet Union. Cuba, however, was deeply indebted to Moscow, financially and politically, and voted against the resolution. It lost its reputation as nonaligned in the Cold War. Castro, instead of becoming a high-profile spokesman for the Movement, remained quiet and inactive, and in 1983 leadership passed to India, which had abstained on the UN vote. Cuba lost its bid to become a member of the United Nations Security Council, and its ambitions for a role in global leadership had collapsed.

Post–Cold War relations

In the post–Cold War environment, Cuban support for guerrilla warfare in Latin America largely subsided, though the Cuban government continued to provide political assistance and support for left-leaning groups and parties in the developing Western Hemisphere. When Soviet leader Mikhail Gorbachev visited Cuba in 1989, the ideological relationship between Havana and Moscow was strained by Gorbachev's implementation of economic and political reforms in the USSR. "We are witnessing sad things in other socialist countries, very sad things", lamented Castro in November 1989, in reference to the changes that were sweeping such communist allies as the Soviet Union, East Germany, Hungary, and Poland. The subsequent dissolution of the Soviet Union in 1991 had an immediate and devastating effect on Cuba.

Cuba today works with a growing bloc of Latin American politicians opposed to the "Washington consensus", the American-led doctrine that free trade, open markets, and privatization will lift poor third-world countries out of economic stagnation. The Cuban government condemned neoliberalism as a destructive force in the developing world, creating an alliance with Presidents Hugo Chávez of Venezuela and Evo Morales of Bolivia in opposing such policies.
Currently, Cuba has friendly diplomatic relationships with Presidents Nicolás Maduro of Venezuela and Daniel Ortega of Nicaragua, with Maduro as perhaps the country's staunchest ally in the post-Soviet era. Cuba has sent thousands of teachers and medical personnel to Venezuela to assist Maduro's socialist-oriented economic programs. Maduro, in turn, provides Cuba with lower-priced petroleum. Cuba's debt to Venezuela for oil is believed to be on the order of one billion US dollars.

Bilateral relations

Africa

Americas

Cuba has supported a number of leftist groups and parties in Latin America and the Caribbean since the 1959 revolution. In the 1960s Cuba established close ties with the emerging Guatemalan social movement led by Luis Augusto Turcios Lima, and supported the establishment of the URNG, a militant organization that has evolved into one of Guatemala's current political parties. In the 1980s Cuba backed both the Sandinistas in Nicaragua and the FMLN in El Salvador, providing military and intelligence training, weapons, guidance, and organizational support.

Asia

Europe

Oceania

Cuba has two embassies in Oceania: one in Wellington, opened in November 2007, and one in Canberra, opened on 24 October 2008. It also has a Consulate General in Sydney. Cuba has had official diplomatic relations with Nauru since 2002 and with the Solomon Islands since 2003, and maintains relations with other Pacific countries by providing aid. In 2008, Cuba was reported to be sending doctors to the Solomon Islands, Vanuatu, Tuvalu, Nauru and Papua New Guinea, while seventeen medical students from Vanuatu were to study in Cuba. It may also provide training for Fijian doctors. Indeed, Fiji's ambassador to the United Nations, Berenado Vunibobo, has stated that his country may seek closer relations with Cuba, and in particular medical assistance, following a decline in Fiji's relations with New Zealand.

International organizations and groups

ACS • ALBA • AOSIS • CELAC • CTO • ECLAC • G33 • G77 • IAEA • ICAO
The TRNC adopted a constitution and held its first elections. The United Nations recognises the sovereignty of the Republic of Cyprus over the entire island of Cyprus. The House of Representatives currently has 59 members elected for a five-year term: 56 members by proportional representation and three observer members representing the Armenian, Latin and Maronite minorities. 24 seats are allocated to the Turkish community but have remained vacant since 1964. The political environment is dominated by the communist AKEL, the liberal-conservative Democratic Rally, the centrist Democratic Party, the social-democratic EDEK and the centrist EURO.KO. In 2008, Dimitris Christofias became the country's first Communist head of state. Due to his involvement in the 2012–13 Cypriot financial crisis, Christofias did not run for re-election in 2013. The 2013 presidential election resulted in Democratic Rally candidate Nicos Anastasiades winning 57.48% of the vote. Anastasiades was sworn in on 28 February 2013 and has been president since then. He was re-elected with 56% of the vote in the 2018 presidential election.

Administrative divisions

The Republic of Cyprus is divided into six districts: Nicosia, Famagusta, Kyrenia, Larnaca, Limassol and Paphos.

Exclaves and enclaves

Cyprus has four exclaves, all in territory that belongs to the British Sovereign Base Area of Dhekelia. The first two are the villages of Ormidhia and Xylotymvou. The third is the Dhekelia Power Station, which is divided by a British road into two parts. The northern part is the EAC refugee settlement. The southern part, even though located by the sea, is also an exclave because it has no territorial waters of its own, those being UK waters. The UN buffer zone runs up against Dhekelia and picks up again from its east side off Ayios Nikolaos, which is connected to the rest of Dhekelia by a thin land corridor. In that sense the buffer zone turns the Paralimni area on the southeast corner of the island into a de facto, though not de jure, exclave.

Foreign relations

The Republic of Cyprus is a member of the following international groups: Australia Group, CN, CE, CFSP, EBRD, EIB, EU, FAO, IAEA, IBRD, ICAO, ICC, ICCt, ITUC, IDA, IFAD, IFC, IHO, ILO, IMF, IMO, Interpol, IOC, IOM, IPU, ITU, MIGA, NAM, NSG, OPCW, OSCE, PCA, UN, UNCTAD, UNESCO, UNHCR, UNIDO, UPU, WCL, WCO, WFTU, WHO, WIPO, WMO, WToO, WTO.

Armed forces

The Cypriot National Guard is the main military institution of the Republic of Cyprus. It is a combined arms force, with land, air and naval elements. Historically all men were required to spend 24 months serving in the National Guard after their 17th birthday, but in 2016 this period of compulsory service was reduced to 14 months. Annually, approximately 10,000 persons are trained in recruit centres. Depending on their awarded speciality, the conscript recruits are then transferred to speciality training camps or to operational units. While the armed forces were mainly conscript-based until 2016, a large professional enlisted institution (ΣΥΟΠ) has since been adopted, which, combined with the reduction of conscript service, produces an approximate 3:1 ratio between conscript and professional enlisted personnel.

Law, justice and human rights

The Cyprus Police is the only National Police Service of the Republic of Cyprus and has been under the Ministry of Justice and Public Order since 1993. In "Freedom in the World 2011", Freedom House rated Cyprus as "free".
In January 2011, the Report of the Office of the United Nations High Commissioner for Human Rights on the question of Human Rights in Cyprus noted that the ongoing division of Cyprus continues to affect human rights throughout the island "... including freedom of movement, human rights pertaining to the question of missing persons, discrimination, the right to life, freedom of religion, and economic, social and cultural rights." The constant focus on the division of the island can sometimes mask other human rights issues. In 2014, Turkey was ordered by the European Court of Human Rights to pay well over $100m in compensation to Cyprus for the invasion; Ankara announced that it would ignore the judgment. In 2014, a group of Cypriot refugees and a European parliamentarian, later joined by the Cypriot government, filed a complaint to the International Court of Justice, accusing Turkey of violating the Geneva Conventions by directly or indirectly transferring its civilian population into occupied territory. Other violations of the Geneva and the Hague Conventions—both ratified by Turkey—amount to what archaeologist Sophocles Hadjisavvas called "the organized destruction of Greek and Christian heritage in the north". These violations include looting of cultural treasures, deliberate destruction of churches, neglect of works of art, and altering the names of important historical sites, which was condemned by the International Council on Monuments and Sites. Hadjisavvas has asserted that these actions are motivated by a Turkish policy of erasing the Greek presence in Northern Cyprus within a framework of ethnic cleansing. But some perpetrators are just motivated by greed and are seeking profit. Art law expert Alessandro Chechi has classified the connection of cultural heritage destruction to ethnic cleansing as the "Greek Cypriot viewpoint", which he reports as having been dismissed by two PACE reports. Chechi asserts joint Greek and Turkish Cypriot responsibility for the destruction of cultural heritage in Cyprus, noting the destruction of Turkish Cypriot heritage in the hands of Greek Cypriot extremists. Economy In the early 21st century the Cypriot economy has diversified and become prosperous. However, in 2012 it became affected by the Eurozone financial and banking crisis. In June 2012, the Cypriot government announced it would need € in foreign aid to support the Cyprus Popular Bank, and this was followed by Fitch downgrading Cyprus's credit rating to junk status. Fitch said Cyprus would need an additional € to support its banks and the downgrade was mainly due to the exposure of Bank of Cyprus, Cyprus Popular Bank and Hellenic Bank, Cyprus's three largest banks, to the Greek financial crisis. The 2012–2013 Cypriot financial crisis led to an agreement with the Eurogroup in March 2013 to split the country's second largest bank, the Cyprus Popular Bank (also known as Laiki Bank), into a "bad" bank which would be wound down over time and a "good" bank which would be absorbed by the Bank of Cyprus. In return for a €10 billion bailout from the European Commission, the European Central Bank and the International Monetary Fund, often referred to as the "troika", the Cypriot government was required to impose a significant haircut on uninsured deposits, a large proportion of which were held by wealthy Russians who used Cyprus as a tax haven. Insured deposits of €100,000 or less were not affected. 
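The mechanics of the 2013 bail-in described above can be illustrated with a short sketch: the haircut applied only to the uninsured portion of a deposit, i.e. the amount above the €100,000 threshold, while insured deposits were untouched. The 40% rate used below is a placeholder assumption, since the text does not state the rate that was ultimately applied to depositors.

```python
# Illustrative sketch of the bail-in mechanics: deposits up to the €100,000
# insured threshold were unaffected, while the portion above it was subject
# to a haircut. The 40% rate is a placeholder, not the rate actually applied
# to Bank of Cyprus or Laiki depositors.
INSURED_LIMIT = 100_000  # EUR, insured deposit threshold

def deposit_after_haircut(balance: float, haircut_rate: float = 0.40) -> float:
    """Return the deposit value after a haircut on the uninsured portion."""
    protected = min(balance, INSURED_LIMIT)
    uninsured = max(balance - INSURED_LIMIT, 0.0)
    return protected + uninsured * (1.0 - haircut_rate)

print(deposit_after_haircut(80_000))   # 80000.0  - fully insured, unaffected
print(deposit_after_haircut(500_000))  # 340000.0 - 40% cut on the 400,000 above the limit
```

The design point is simply that the loss falls entirely on the uninsured slice of large balances, which is why the measure weighed most heavily on the wealthy foreign depositors mentioned above.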
According to the 2017 International Monetary Fund estimates, its per capita GDP (adjusted for purchasing power) at $36,442 is below the average of the European Union. Cyprus has been sought as a base for several offshore businesses for its low tax rates. Tourism, financial services and shipping are significant parts of the economy. Economic policy of the Cyprus government has focused on meeting the criteria for admission to the European Union. The Cypriot government adopted the euro as the national currency on 1 January 2008. Cyprus is the last EU member fully isolated from energy interconnections and it is expected that it will be connected to European network via EuroAsia Interconnector, 2000 MW HVDC undersea power cable. EuroAsia Interconnector will connect Greek, Cypriot, and Israeli power grids. It is a leading Project of Common Interest of the European Union and also priority Electricity Highway Interconnector Project. In recent years significant quantities of offshore natural gas have been discovered in the area known as Aphrodite (at the exploratory drilling block 12) in Cyprus's exclusive economic zone (EEZ), about south of Limassol at 33°5'40″N and 32°59'0″E. However, Turkey's offshore drilling companies have accessed both natural gas and oil resources since 2013. Cyprus demarcated its maritime border with Egypt in 2003, with Lebanon in 2007, and with Israel in 2010. In August 2011, the US-based firm Noble Energy entered into a production-sharing agreement with the Cypriot government regarding the block's commercial development. Turkey, which does not recognise the border agreements of Cyprus with its neighbours, threatened to mobilise its naval forces if Cyprus proceeded with plans to begin drilling at Block 12. Cyprus's drilling efforts have the support of the US, EU, and UN, and on 19 September 2011 drilling in Block 12 began without any incidents being reported. Because of the heavy influx of tourists and foreign investors, the property rental market in Cyprus has grown in recent years. In late 2013, the Cyprus Town Planning Department announced a series of incentives to stimulate the property market and increase the number of property developments in the country's town centres. This followed earlier measures to quickly give immigration permits to third country nationals investing in Cyprus property. Transport Available modes of transport are by road, sea and air. Of the of roads in the Republic of Cyprus in 1998, were paved, and were unpaved. In 1996 the Turkish-occupied area had a similar ratio of paved to unpaved, with approximately of paved road and unpaved. Cyprus is one of only three EU nations in which vehicles drive on the left-hand side of the road, a remnant of British colonisation (the others being Ireland and Malta). A series of motorways runs along the coast from Paphos east to Ayia Napa, with two motorways running inland to Nicosia, one from Limassol and one from Larnaca. Per capita private car ownership is the 29th-highest in the world. There were approximately 344,000 privately owned vehicles, and a total of 517,000 registered motor vehicles in the Republic of Cyprus in 2006. In 2006, plans were announced to improve and expand bus services and other public transport throughout Cyprus, with the financial backing of the European Union Development Bank. In 2010 the new bus network was implemented. Cyprus has several heliports and two international airports: Larnaca International Airport and Paphos International Airport. 
A third airport, Ercan International Airport, operates in the Turkish Cypriot administered area with direct flights only to Turkey (Turkish Cypriot ports are closed to international traffic apart from Turkey). Nicosia International Airport has been closed since 1974. The main harbours of the island are Limassol and Larnaca, which service cargo, passenger and cruise ships. Communications Cyta, the state-owned telecommunications company, manages most telecommunications and Internet connections on the island. However, following deregulation of the sector, a few private telecommunications companies emerged, including epic, Cablenet, OTEnet Telecom, Omega Telecom and PrimeTel. In the Turkish-controlled area of Cyprus, two different companies administer the mobile phone network: Turkcell and KKTC Telsim. Demographics According to the CIA World Factbook, in 2001 Greek Cypriots comprised 77%, Turkish Cypriots 18%, and others 5% of the Cypriot population. At the time of the 2011 government census, there were 10,520 people of Russian origin living in Cyprus. According to the first population census after the declaration of independence, carried out in December 1960 and covering the entire island, Cyprus had a total population of 573,566, of whom 442,138 (77.1%) were Greeks, 104,320 (18.2%) Turkish, and 27,108 (4.7%) others. Due to the inter-communal ethnic tensions between 1963 and 1974, an island-wide census was regarded as impossible. Nevertheless, the Cypriot government conducted one in 1973, without the Turkish Cypriot populace. According to this census, the Greek Cypriot population was 482,000. One year later, in 1974, the Cypriot government's Department of Statistics and Research estimated the total population of Cyprus at 641,000; of whom 506,000 (78.9%) were Greeks, and 118,000 (18.4%) Turkish. After the partition of the island in 1974, the government of Cyprus conducted four more censuses: in 1976, 1982, 1992 and 2001; these excluded the Turkish population which was resident in the northern part of the island. According to the Republic of Cyprus's latest estimate, in 2005, the number of Cypriot citizens currently living in the Republic of Cyprus is around 871,036. In addition to this, the Republic of Cyprus is home to 110,200 foreign permanent residents and an estimated 10,000–30,000 undocumented illegal immigrants currently living in the south of the island. According to the 2006 census carried out by Northern Cyprus, there were 256,644 (de jure) people living in Northern Cyprus. 178,031 were citizens of Northern Cyprus, of whom 147,405 were born in Cyprus (112,534 from the north; 32,538 from the south; 371 did not indicate what part of Cyprus they were from); 27,333 born in Turkey; 2,482 born in the UK and 913 born in Bulgaria. Of the 147,405 citizens born in Cyprus, 120,031 say both parents were born in Cyprus; 16,824 say both parents born in Turkey; 10,361 have one parent born in Turkey and one parent born in Cyprus. In 2010, the International Crisis Group estimated that the total population of Cyprus was 1.1 million, of which there was an estimated 300,000 residents in the north, perhaps half of whom were either born in Turkey or are children of such settlers. The villages of Rizokarpaso (in Northern Cyprus), Potamia (in Nicosia district) and Pyla (in Larnaca District) are the only settlements remaining with a mixed Greek and Turkish Cypriot population. 
Y-Dna haplogroups are found at the following frequencies in Cyprus: J (43.07% including 6.20% J1), E1b1b (20.00%), R1 (12.30% including 9.2% R1b), F (9.20%), I (7.70%), K (4.60%), A (3.10%). J, K, F and E1b1b haplogroups consist of lineages with differential distribution within Middle East, North Africa and Europe while R1 and I are typical in European populations. Outside Cyprus there are significant and thriving diasporas - both a Greek Cypriot diaspora and a Turkish Cypriot diaspora - in the United Kingdom, Australia, Canada, the United States, Greece and Turkey. Functional urban areas Religion The majority of Greek Cypriots identify as Christians, specifically Greek Orthodox, whereas most Turkish Cypriots are adherents of Sunni Islam. According to Eurobarometer 2005, Cyprus was the second most religious state in the European Union at that time, after Malta (although in 2005 Romania wasn't in the European Union; currently Romania is the most religious state in the EU) (see Religion in the European Union). The first President of Cyprus, Makarios III, was an archbishop, and the Vice President of Cyprus was Fazıl Küçük. The current leader of the Greek Orthodox Church of Cyprus is Archbishop Chrysostomos II. Hala Sultan Tekke, situated near the Larnaca Salt Lake is an object of pilgrimage for Muslims. According to the 2001 census carried out in the Government-controlled area, 94.8% of the population were Eastern Orthodox, 0.9% Armenians and Maronites, 1.5% Roman Catholics, 1.0% Church of England, and 0.6% Muslims. There is also a Jewish community on Cyprus. The remaining 1.3% adhered to other religious denominations or did not state their religion. Languages Cyprus has two official languages, Greek and Turkish. Armenian and Cypriot Maronite Arabic are recognised as minority languages. Although without official status, English is widely spoken and it features widely on road signs, public notices, and in advertisements, etc. English was the sole official language during British colonial rule and the lingua franca until 1960, and continued to be used (de facto) in courts of law until 1989 and in legislation until 1996. 80.4% of Cypriots are proficient in the English language as a second language. Russian is widely spoken among the country's minorities, residents and citizens of post-Soviet countries, and Pontic Greeks. Russian, after English and Greek, is the third language used on many signs of shops and restaurants, particularly in Limassol and Paphos. In addition to these languages, 12% speak French and 5% speak German. The everyday spoken language of Greek Cypriots is Cypriot Greek and that of Turkish Cypriots is Cypriot Turkish. These vernaculars both differ from their standard registers significantly. Education Cyprus has a highly developed system of primary and secondary education offering both public and private education. The high quality of instruction can be attributed in part to the fact that nearly 7% of the GDP is spent on education which makes Cyprus one of the top three spenders of education in the EU along with Denmark and Sweden. State schools are generally seen as equivalent in quality of education to private-sector institutions. However, the value of a state high-school diploma is limited by the fact that the grades obtained account for only around 25% of the final grade for each topic, with the remaining 75% assigned by the teacher during the semester, in a minimally transparent way. 
Cypriot universities (like universities in Greece) ignore high school grades almost entirely for admissions purposes. While a high-school diploma is mandatory for university attendance, admissions are decided almost exclusively on the basis of scores at centrally administered university entrance examinations that all university candidates are required to take. The majority of Cypriots receive their higher education at Greek, British, Turkish, other European and North American universities. Cyprus currently has the highest percentage of citizens of working age who have higher-level education in the EU at 30% which is ahead of Finland's 29.5%. In addition, 47% of its population aged 25–34 have tertiary education, which is the highest in the EU. The body of Cypriot students is highly mobile, with 78.7% studying in a university outside Cyprus. Culture Greek Cypriots and Turkish Cypriots share a lot in common in their culture due to cultural exchanges but also have differences. Several traditional food (such as souvla and halloumi) and beverages are similar, as well as expressions and ways of life. Hospitality and buying or offering food and drinks for guests or others are common among both. In both communities, music, dance and art are integral parts of social life and many artistic, verbal and nonverbal expressions, traditional dances such as tsifteteli, similarities in dance costumes and importance placed on social activities are shared between the communities. However, the two communities have distinct religions and religious cultures, with the Greek Cypriots traditionally being Greek Orthodox and Turkish Cypriots traditionally being Sunni Muslims, which has partly hindered cultural exchange. Greek Cypriots have influences from Greece and Christianity, while Turkish Cypriots have influences from Turkey and Islam. The Limassol Carnival Festival is an annual carnival which is held at Limassol, in Cyprus. The event which is very popular in Cyprus was introduced in the 20th century. Arts The art history of Cyprus can be said to stretch back up to 10,000 years, following the discovery of a series of Chalcolithic period carved figures in the villages of Khoirokoitia and Lempa. The island is the home to numerous examples of high quality religious icon painting from the Middle Ages as well as many painted churches. Cypriot architecture was heavily influenced by French Gothic and Italian renaissance introduced in the island during the era of Latin domination (1191–1571). A well known traditional art that dates at least from the 14th century is the Lefkara lace (also known as "Lefkaratika", which originates from the village Lefkara. Lefkara lace is recognised as an intangible cultural heritage (ICH) by Unesco, and it is characterised by distinct design patterns, and its intricate, time-consuming production process. A genuine Lefkara lace with full embroidery can take typically hundreds of hours to be made, and that is why it is usually priced quite high. Another local form of art that originated from Lefkara is the production of Cypriot Filigree (locally known as Trifourenio), a type of jewellery that is made with twisted threads of silver. In Lefkara village there is government funded centre named Lefkara Handicraft Centre the mission of which is to educate and teach the art of making the embroidery and silver jewellery. There's also the Museum of Traditional Embroidery and Silversmithing located in the village which has large collection of local handmade art. 
In modern times Cypriot art history begins with the painter Vassilis Vryonides (1883–1958) who studied at the Academy of Fine Arts in Venice. Arguably the two founding fathers of modern Cypriot art were Adamantios Diamantis (1900–1994) who studied at London's Royal College of Art and Christopheros Savva (1924–1968) who also studied in London, at Saint Martin's School of Art. In 1960, Savva founded, together with Welsh artist Glyn Hughes, Apophasis [Decision], the first independent cultural centre of the newly established Republic of Cyprus. In 1968, Savva was among the artists representing Cyprus in its inaugural Pavilion at the 34th Venice Biennale. English Cypriot Artist Glyn HUGHES 1931–2014. In many ways these two artists set the template for subsequent Cypriot art and both their artistic styles and the patterns of their education remain influential to this day. In particular the majority of Cypriot artists still train in England while others train at art schools in Greece and local art institutions such as the Cyprus College of Art, University of Nicosia and the Frederick Institute of Technology. One of the features of Cypriot art is a tendency towards figurative painting although conceptual art is being rigorously promoted by a number of art "institutions" and most notably the Nicosia Municipal Art Centre. Municipal art galleries exist in all the main towns and there is a large and lively commercial art scene. Cyprus was due to host the international art festival Manifesta in 2006 but this was cancelled at the last minute following a dispute between the Dutch organizers of Manifesta and the Cyprus Ministry of Education and Culture over the location of some of the Manifesta events in the Turkish sector of the capital Nicosia. There were also complaints from some Cypriot artists that the Manifesta organisation was importing international artists to take part in the event while treating members of the local art community in Cyprus as 'ignorant' and 'uncivilized natives' who need to be taught 'how to make proper art'. Other notable Greek Cypriot artists include Helene Black, Kalopedis family, Panayiotis Kalorkoti, Nicos Nicolaides, Stass Paraskos, Arestís Stasí, Telemachos Kanthos, Konstantia Sofokleous and Chris Achilleos, and Turkish Cypriot artists include İsmet Güney, Ruzen Atakan and Mutlu Çerkez. Music The traditional folk music of Cyprus has several common elements with Greek, Turkish, and Arabic Music, all of which have descended from Byzantine music, including Greek Cypriot and Turkish Cypriot dances such as the sousta, syrtos, zeibekikos, | It is also the world's 80th largest by area and world's 51st largest by population. It measures long from end to end and wide at its widest point, with Turkey to the north. It lies between latitudes 34° and 36° N, and longitudes 32° and 35° E. Other neighboring territories include Syria and Lebanon to the east and southeast (, respectively), Israel to the southeast, The Gaza Strip 427 kilometres (265 mi) to the southeast, Egypt to the south, and Greece to the northwest: to the small Dodecanesian island of Kastellorizo (Megisti), to Rhodes and to the Greek mainland. Sources alternatively place Cyprus in Europe, or Western Asia and the Middle East. The physical relief of the island is dominated by two mountain ranges, the Troodos Mountains and the smaller Kyrenia Range, and the central plain they encompass, the Mesaoria. The Mesaoria plain is drained by the Pedieos River, the longest on the island. 
The Troodos Mountains cover most of the southern and western portions of the island and account for roughly half its area. The highest point on Cyprus is Mount Olympus at , located in the centre of the Troodos range. The narrow Kyrenia Range, extending along the northern coastline, occupies substantially less area, and elevations are lower, reaching a maximum of . The island lies within the Anatolian Plate. Cyprus contains the Cyprus Mediterranean forests ecoregion. It had a 2018 Forest Landscape Integrity Index mean score of 7.06/10, ranking it 59th globally out of 172 countries. Geopolitically, the island is subdivided into four main segments. The Republic of Cyprus occupies the southern two-thirds of the island (59.74%). The Turkish Republic of Northern Cyprus occupies the northern third (34.85%), and the United Nations-controlled Green Line provides a buffer zone that separates the two and covers 2.67% of the island. Lastly, two bases under British sovereignty are located on the island: Akrotiri and Dhekelia, covering the remaining 2.74%. Climate Cyprus has a subtropical climate – Mediterranean and semi-arid type (in the north-eastern part of the island) – Köppen climate classifications Csa and BSh, with very mild winters (on the coast) and warm to hot summers. Snow is possible only in the Troodos Mountains in the central part of island. Rain occurs mainly in winter, with summer being generally dry. Cyprus has one of the warmest climates in the Mediterranean part of the European Union. The average annual temperature on the coast is around during the day and at night. Generally, summers last about eight months, beginning in April with average temperatures of during the day and at night, and ending in November with average temperatures of during the day and at night, although in the remaining four months temperatures sometimes exceed . Among all cities in the Mediterranean part of the European Union, Limassol has one of the warmest winters, in the period January – February average temperature is during the day and at night, in other coastal locations in Cyprus is generally during the day and at night. During March, Limassol has average temperatures of during the day and at night, in other coastal locations in Cyprus is generally during the day and at night. The middle of summer is hot – in July and August on the coast the average temperature is usually around during the day and around at night (inland, in the highlands average temperature exceeds ) while in the June and September on the coast the average temperature is usually around during the day and around at night in Limassol, while is usually around during the day and around at night in Paphos. Large fluctuations in temperature are rare. Inland temperatures are more extreme, with colder winters and hotter summers compared with the coast of the island. Average annual temperature of sea is , from in February to in August (depending on the location). In total 7 months – from May to November – the average sea temperature exceeds . Sunshine hours on the coast are around 3,200 per year, from an average of 5–6 hours of sunshine per day in December to an average of 12–13 hours in July. This is about double that of cities in the northern half of Europe; for comparison, London receives about 1,540 per year. In December, London receives about 50 hours of sunshine while coastal locations in Cyprus about 180 hours (almost as much as in May in London). Water supply Cyprus suffers from a chronic shortage of water. 
The country relies heavily on rain to provide household water, but in the past 30 years average yearly precipitation has decreased. Between 2001 and 2004, exceptionally heavy annual rainfall pushed water reserves up, with supply exceeding demand, allowing total storage in the island's reservoirs to rise to an all-time high by the start of 2005. However, since then demand has increased annually – a result of local population growth, foreigners moving to Cyprus and the number of visiting tourists – while supply has fallen as a result of more frequent droughts. Dams remain the principal source of water both for domestic and agricultural use; Cyprus has a total of 107 dams (plus one currently under construction) and reservoirs, with a total water storage capacity of about . Water desalination plants are gradually being constructed to deal with recent years of prolonged drought. The Government has invested heavily in the creation of water desalination plants which have supplied almost 50 per cent of domestic water since 2001. Efforts have also been made to raise public awareness of the situation and to encourage domestic water users to take more responsibility for the conservation of this increasingly scarce commodity. Turkey has built a water pipeline under the Mediterranean Sea from Anamur on its southern coast to the northern coast of Cyprus, to supply Northern Cyprus with potable and irrigation water (see Northern Cyprus Water Supply Project). Politics Cyprus is a presidential republic. The head of state and of the government is elected by a process of universal suffrage for a five-year term. Executive power is exercised by the government with legislative power vested in the House of Representatives whilst the Judiciary is independent of both the executive and the legislature. The 1960 Constitution provided for a presidential system of government with independent executive, legislative and judicial branches as well as a complex system of checks and balances including a weighted power-sharing ratio designed to protect the interests of the Turkish Cypriots. The executive was led by a Greek Cypriot president and a Turkish Cypriot vice-president elected by their respective communities for five-year terms and each possessing a right of veto over certain types of legislation and executive decisions. Legislative power rested on the House of Representatives who were also elected on the basis of separate voters' rolls. Since 1965, following clashes between the two communities, the Turkish Cypriot seats in the House remain vacant. In 1974 Cyprus was divided de facto when the Turkish army occupied the northern third of the island. The Turkish Cypriots subsequently declared independence in 1983 as the Turkish Republic of Northern Cyprus but were recognised only by Turkey. In 1985 the TRNC adopted a constitution and held its first elections. The United Nations recognises the sovereignty of the Republic of Cyprus over the entire island of Cyprus. The House of Representatives currently has 59 members elected for a five-year term, 56 members by proportional representation and three observer members representing the Armenian, Latin and Maronite minorities. 24 seats are allocated to the Turkish community but remain vacant since 1964. The political environment is dominated by the communist AKEL, the liberal conservative Democratic Rally, the centrist Democratic Party, the social-democratic EDEK and the centrist EURO.KO. In 2008, Dimitris Christofias became the country's first Communist head of state. 
Due to his involvement in the 2012–13 Cypriot financial crisis, Christofias did not run for re-election in 2013. The Presidential election in 2013 resulted in Democratic Rally candidate Nicos Anastasiades winning 57.48% of the vote. As a result, Anastasiades was sworn in and has been president since 28 February 2013. Anastasiades was re-elected with 56% of the vote in the 2018 presidential election. Administrative divisions The Republic of Cyprus is divided into six districts: Nicosia, Famagusta, Kyrenia, Larnaca, Limassol and Paphos. Exclaves and enclaves Cyprus has four exclaves, all in territory that belongs to the British Sovereign Base Area of Dhekelia. The first two are the villages of Ormidhia and Xylotymvou. The third is the Dhekelia Power Station, which is divided by a British road into two parts. The northern part is the EAC refugee settlement. The southern part, even though located by the sea, is also an exclave because it has no territorial waters of its own, those being UK waters. The UN buffer zone runs up against Dhekelia and picks up again from its east side off Ayios Nikolaos and is connected to the rest of Dhekelia by a thin land corridor. In that sense the buffer zone turns the Paralimni area on the southeast corner of the island into a de facto, though not de jure, exclave. Foreign relations The Republic of Cyprus is a member of the following international groups: Australia Group, CN, CE, CFSP, EBRD, EIB, EU, FAO, IAEA, IBRD, ICAO, ICC, ICCt, ITUC, IDA, IFAD, IFC, IHO, ILO, IMF, IMO, Interpol, IOC, IOM, IPU, ITU, MIGA, NAM, NSG, OPCW, OSCE, PCA, UN, UNCTAD, UNESCO, UNHCR, UNIDO, UPU, WCL, WCO, WFTU, WHO, WIPO, WMO, WToO, WTO. Armed forces The Cypriot National Guard is the main military institution of the Republic of Cyprus. It is a combined arms force, with land, air and naval elements. Historically all men were required to spend 24 months serving in the National Guard after their 17th birthday, but in 2016 this period of compulsory service was reduced to 14 months. Annually, approximately 10,000 persons are trained in recruit centres. Depending on their awarded speciality the conscript recruits are then transferred to speciality training camps or to operational units. While until 2016 the armed forces were mainly conscript based, since then a large Professional Enlisted institution has been adopted (ΣΥΟΠ), which combined with the reduction of conscript service produces an approximate 3:1 ratio between conscript and professional enlisted. Law, justice and human rights The Cyprus Police is the only National Police Service of the Republic of Cyprus and has been under the Ministry of Justice and Public Order since 1993. In "Freedom in the World 2011", Freedom House rated Cyprus as "free".
grown under irrigation. Little evidence remains that this broad, central plain, open to the sea at either end, was once covered with rich forests whose timber was coveted by ancient conquerors for their sailing vessels. The now-divided capital of the island, Nicosia, lies in the middle of this central plain. Natural vegetation Despite its small size, Cyprus has a variety of natural vegetation. This includes forests of conifers and broadleaved trees such as pine (Pinus brutia), cedar, cypresses and oaks. Ancient authors write that most of Cyprus, even Messaoria, was heavily forested, and there are still considerable forests on the Troodos and Kyrenia ranges, and locally at lower altitudes. About 17% of the whole island is classified as woodland. Where there is no forest, tall shrub communities of golden oak (Quercus alnifolia), strawberry tree (Arbutus andrachne), terebinth (Pistacia terebinthus), olive (Olea europaea), kermes oak (Quercus coccifera) and styrax (Styrax officinalis) are found, but such maquis is uncommon. Over most of the island untilled ground bears a grazed covering of garrigue, largely composed of low bushes of Cistus, Genista sphacelata, Calicotome villosa, Lithospermum hispidulum, Phaganalon rupestre and, locally, Pistacia lentiscus. Where grazing is excessive this covering is soon reduced, and an impoverished batha remains, consisting principally of Thymus capitatus, Sarcopoterium spinosum, and a few stunted herbs. Climate The Mediterranean climate, warm and rather dry, with rainfall mainly between November and March, favors agriculture. In general, the island experiences mild wet winters and dry hot summers. Variations in temperature and rainfall are governed by altitude and, to a lesser extent, distance from the coast. Hot, dry summers from mid-May to mid-September and rainy, rather changeable winters from November to mid-March are separated by short autumn and spring seasons. Area and boundaries Area: Total: 9,251 km2 (of which are under the control of the Republic of Cyprus and of which are under the administration of the de facto Turkish Republic of Northern Cyprus) Land: 9,241 km2 Water: 10 km2 Land boundaries: 0 km Coastline: 648 km Maritime claims: Territorial sea: Continental shelf: 200 m depth or to the depth of exploitation Exclusive Economic Zone: Elevation extremes: Lowest point: Mediterranean Sea 0 m Highest point: Olympus 1,952 m Resource and land use Natural resources: copper, pyrite, asbestos, gypsum, timber, salt, marble, clay earth pigment Land use: arable land: 9.90% permanent crops: 3.24% other: 86.86% (2012) | the coastal plain. While the Troodos Mountains are a massif formed of molten igneous rock, the Kyrenia Range is a narrow limestone ridge that rises suddenly from the plains. Its easternmost extension becomes a series of foothills on the Karpas Peninsula. That peninsula points toward Asia Minor, to which Cyprus belongs geologically. The Kyrenia Range is also known as the Pentadactylon Mountains, due to a summit resembling five fingers. Even the highest peaks of the Kyrenia Range are hardly more than half the height of the great dome of the Troodos massif, Mount Olympus (), but their seemingly inaccessible, jagged slopes make them considerably more spectacular. British writer Lawrence Durrell, in Bitter Lemons, wrote of the Troodos as "an unlovely jumble of crags and heavyweight rocks" and of the Kyrenia Range as belonging to "the world of Gothic Europe, its lofty crags studded with crusader castles." 
Rich copper deposits were discovered in antiquity on the slopes of the Troodos. The massive sulfide deposits formed as a part of an ophiolite complex at a spreading center under the Mediterranean Sea which was tectonically uplifted during the Pleistocene and emplaced in its current location. Drainage In much of the island, access to a year-round supply of water is difficult. This is traditionally attributed to deforestation which damaged the island's drainage system through erosion, but Grove and Rackham question this view. A network of winter rivers rises in the Troodos Mountains and flows out from them in all directions. The Yialias River and the Pedhieos River flow eastward across the Mesaoria into Famagusta Bay; the Serraghis River flows northwest through the Morphou plain. All of the island's rivers, however, are dry in the summer. An extensive system of dams and waterways has been constructed to bring water to farming areas. The Mesaoria is the agricultural heartland of the island, but its productiveness for wheat and barley depends very much on winter rainfall; other crops are grown under irrigation.
(Data refer to government controlled areas): Historical population Turkish Cypriots were the majority of the population registered for taxation between 1777 and 1800. However, it is likely that the Muslim population never exceeded 35-40 per cent of the total population of Cyprus. Rather, many Orthodox Christians registered as Muslims in order to reduce taxation from the government. In the census from 1881 to 1960, all Muslims are counted as Turks, only Greek Orthodox are counted as Greeks. There were small populations of Greek-speaking Muslims and Turkish-speaking Greek Orthodox. In total, between 1955 and 1973, 16,519 Turks and 71,036 Greeks emigrated from the country. Of the emigrated Turkish Cypriots in this period, only 290 went to Turkey. In the 2011 census, 208 people stated their ethnic origin as being Latin. Immigration Large-scale demographic changes have been caused since 1964 by the movements of peoples across the island and the later influx of settlers from Turkey to Northern Cyprus. According to the 2011 Census there are 170,383 non-citizens living in Cyprus, of whom 106,270 are EU citizens and 64,113 are from third countries. The largest EU groups by nationality are Greeks (29,321), British (24,046), Romanians (23,706) and Bulgarians (18,536). The largest non-EU groups are Filipinos (9,413), Russians (8,164), Sri Lankans (7,269) and Vietnamese (7,028). There are an estimated 20–25,000 undocumented migrants from third countries also living in the Republic, though migrant rights groups dispute these figures. The demographic changes in society have led to some racist incidents,<ref>"Teen says beaten and mocked by police in racist incident"</ </ref> and the formation of the charity KISA in response. The demographic character of Northern Cyprus changed after the Turkish invasion in 1974 and especially during the last 10–15 years. TRNC census carried out in April 2006 showed that out of a total population of 256,644 in Northern Cyprus, 132,635, or 52%, were Turkish Cypriots in the sense that they were born in Cyprus of at least one Cyprus-born parent (for 120,007 of these both parents were Cyprus-born). In addition, 43,062 so called TRNC citizens (17%) had at least one non-Cypriot Turkish-born parent, 2,334 so called TRNC citizens (1%) had parents born in other countries, 70,525 residents (27%) had Turkish citizenship, and 8,088 (3%) were citizens of other countries (mainly UK, Bulgaria, and Iran). Based on these census data, it is estimated that 113,687 Northern Cyprus residents, or 44% of the population, are not Turkish Cypriots properly speaking, but are in fact "Turkish immigrants" or "Turkish settlers" from Anatolia. Alternative sources suggest that there are 146,122 Turkish settlers from Anatolia in Northern Cyprus (2007 figures) and that the Turkish Cypriots in Northern Cyprus are today outnumbered by the Turkish settlers, contrary to the picture presented by the 2006 so called TRNC census. Almost one-third of the Turkish settlers in Northern Cyprus have been granted TRNC citizenship by the authorities of Northern Cyprus and have thus been naturalized. . Settlement in Northern Cyprus, especially if accompanied by naturalization, is in violation of article 49 of the Geneva Conventions Protocol of 1977, since the Turkish occupation has been declared illegal by the UN. The UN General Assembly have stated the settlement of Turkish mainlanders, "constitute[s] a form of colonialism and attempt to change illegally the demographic structure of Cyprus". 
The Republic of Cyprus considers these Turkish immigrants to be "illegal settlers" and does not include them in the population estimates for the entire island published by the Republic of Cyprus Statistical Service. Emigration Nationality group Greek 98.8%, Other 1% (includes Maronite, Armenian, Turkish-Cypriot) Unspecified 0.2% Languages Greek and Turkish are the official languages according to Article 3 of the Constitution of Cyprus. In Northern Cyprus, the official language is Turkish (Article 2 of the 1983 Constitution of Northern Cyprus). English is widely spoken on the island. Religion The Greek Cypriot community adheres to the Autocephalous Greek Orthodox Church of Cyprus and the Turkish Cypriot community adheres to Islam. The religious groups of Armenians, Maronites and Latins (about 9,000 people in total) opted, in accordance with the 1960 constitution, to belong to the Greek Cypriot community.
border between the two sides, and the failure of an attempt to reunify the island under the terms of a United Nations-sponsored initiative guided by the UN Secretary-General, Kofi Annan. None of the Greek Cypriot parties has been able to elect a president by itself or dominate the 56-seat House of Representatives. The 165,000 Greek Cypriot refugees are also a potent political force, along with the independent Orthodox Church of Cyprus, which has some influence in temporal as well as ecclesiastical matters. The working of the Cypriot state was fraught with difficulty from the very early days after independence in 1960, and intercommunal tension and occasionally violence was a feature of the first decade of Cypriot independence. In 1963, the Cypriot president, Makarios, proposed 13 amendments to the Constitution in order to “remove obstacles to the smooth functioning and development of the state.” This was done with the encouragement of the British High Commissioner in Cyprus, who considered the amendments “a reasonable basis for discussion.” Violence erupted between Greek and Turkish Cypriots in December 1963 and by the following year the United Nations agreed to undertake peacekeeping operations UNFICYP. UN-sponsored negotiations to develop institutional arrangements acceptable to the Greek Cypriot and Turkish Cypriot communities began in 1968; several sets of negotiations and other initiatives followed. After the 1974 invasion following a Greek junta-based coup attempt, Makarios secured international recognition of his Greek Cypriot government as the sole legal authority on Cyprus, which has proved to be a very significant strategic advantage for the Greek Cypriots in the decades since. Negotiations continued in the years after 1974 with varying degrees of regularity and success, but none resulted in a full reunification. On 15 November 1983 the Turkish Cypriot North declared independence and the formation of the Turkish Republic of Northern Cyprus (TRNC), which has been recognized only by Turkey. Both sides publicly call for the resolution of intercommunal differences and creation of a new federal system of government. Following the 1998 presidential election, Klerides tried to form a government of national unity, by including six ministers from Klerides' Democratic Rally party, two ministers from the socialist EDEK, three from the Democratic Party (who broke ranks with party leader Spyros Kyprianou) and one from the United Democrats. However, a national unity government was not achieved due to the leftist AKEL and centrist Democratic Party rejecting the offer, preferring to remain opposition parties. Reunification, the Annan Plan and EU entry The results of early negotiations between the Greek Cypriot and Turkish Cypriot politicians resulted in a broad agreement in principle to reunification as a bicameral, bi-zonal federation with territory allocated to the Greek and Turkish Cypriot communities within a united island. However, agreement was never reached on the finer details, and the two sides often met deadlock over the following points, among others: The Greek Cypriot side: took a strong line on the right of return for refugees to properties vacated in the 1974 displacement of Cypriots on both sides, which was based on both UN Resolutions and decisions of the European Court of Human Rights; took a dim view of any proposals which did not allow for the repatriation of Turkish settlers from the mainland who had emigrated to Cyprus since 1974; and supported a stronger central government. 
The Turkish Cypriot side: favoured a weak central government presiding over two sovereign states in voluntary association, a legacy of earlier fears of domination by the majority Greek Cypriots; and opposed plans for demilitarisation, citing security concerns. The continued difficulties in finding a settlement presented a potential obstacle to Cypriot entry to the European Union, for which the government had applied in 1997. UN-sponsored talks between the Greek and Turkish Cypriot leaders, Glafkos Klerides and Rauf Denktaş, continued intensively in 2002, but without resolution. In December 2002, the EU formally invited Cyprus to join in 2004, insisting that EU membership would apply to the whole island and hoping that it would provide a significant enticement for reunification resulting from the outcome of ongoing talks. However, weeks before the UN deadline, Klerides was defeated in presidential elections by centre candidate Tassos Papadopoulos. Papadopoulos had a reputation as a hard-liner on reunification and based his stance on international law and human rights. By mid-March, the UN declared that the talks had failed. | and held elections—an arrangement recognized only by Turkey. For information pertaining to this, see Politics of the Turkish Republic of Northern Cyprus. The Organisation of the Islamic Conference (now the Organisation of Islamic Cooperation) granted it observer member status under the name of "Turkish Cypriot State". Political conditions The division of Cyprus has remained an intractable political problem plaguing relations between Greece and Turkey, and drawing in NATO, of which both Greece and Turkey are members, and latterly the European Union, which has admitted Greece and Cyprus and which Turkey has been seeking to join for over twenty years.
A United Nations plan sponsored by Secretary-General Kofi Annan, announced on 31 March 2004, based on the progress made during the talks in Switzerland and fleshed out by the UN, was put for the first time to civilians on both sides in separate referendums on 24 April 2004. The Greek Cypriot side overwhelmingly rejected the Annan Plan, while the Turkish Cypriot side voted in favour. In May 2004, Cyprus entered the EU still divided, although in practice membership applies only to the southern part of the island, which is under the control of the Republic of Cyprus. In acknowledgment of the Turkish Cypriot community's support for reunification, however, the EU made it clear that trade concessions would be made to stimulate economic growth in the north, and it remains committed to reunification under acceptable terms. Though some trade restrictions were lifted on the north to ease the economic isolation of the Turkish Cypriots, further negotiations have not been a priority. There is now a focus on convincing Turkey to recognise the government of Cyprus, a requirement for Turkish admission advocated most strongly by Cyprus and France. Constitution The 16 August 1960 constitution envisioned power sharing between the Greek Cypriots and the Turkish Cypriots. Efforts to amend the constitution sparked the intercommunal strife of 1963. This constitution is still in force, though there is no Turkish Cypriot presence in the Cypriot government. Executive branch President: Nicos Anastasiades (Democratic Rally), in office since 28 February 2013. The president, elected by popular vote for a five-year term, is both the chief of state and the head of government; the post of vice president is currently vacant; under the 1960 constitution, the |
However, after more than three decades of unbroken growth, the Cypriot economy contracted in 2009. This reflected the exposure of Cyprus to the Great Recession and European debt crisis. In recent times, concerns have been raised about the state of public finances and spiralling borrowing costs. Furthermore, Cyprus was dealt a severe blow by the Evangelos Florakis Naval Base explosion in July 2011, with the cost to the economy estimated at €1–3 billion, or up to 17% of GDP. The economic achievements of Cyprus during the preceding decades have been significant, bearing in mind the severe economic and social dislocation created by the Turkish invasion of 1974 and the continuing occupation of the northern part of the island by Turkey. The Turkish invasion inflicted a serious blow to the Cyprus economy and in particular to agriculture, tourism, mining and Quarrying: 70 percent of the island's wealth-producing resources were lost, the tourist industry lost 65 percent of its hotels and tourist accommodation, the industrial sector lost 46 percent, and mining and quarrying lost 56 percent of production. The loss of the port of Famagusta, which handled 83 percent of the general cargo, and the closure of Nicosia International Airport, in the buffer zone, were additional setbacks. The success of Cyprus in the economic sphere has been attributed, inter alia, to the adoption of a market-oriented economic system, the pursuance of sound macroeconomic policies by the government as well as the existence of a dynamic and flexible entrepreneurship and a highly educated labor force. Moreover, the economy benefited from the close cooperation between the public and private sectors. In the past 30 years, the economy has shifted from agriculture to light manufacturing and services. The services sector, including tourism, contributes almost 80% to GDP and employs more than 70% of the labor force. Industry and construction account for approximately one-fifth of GDP and labor, while agriculture is responsible for 2.1% of GDP and 8.5% of the labor force. Potatoes and citrus are the principal export crops. After robust growth rates in the 1980s (average annual growth was 6.1%), economic performance in the 1990s was mixed: real GDP growth was 9.7% in 1992, 1.7% in 1993, 6.0% in 1994, 6.0% in 1995, 1.9% in 1996 and 2.3% in 1997. This pattern underlined the economy's vulnerability to swings in tourist arrivals (i.e., to economic and political conditions in Cyprus, Western Europe, and the Middle East) and the need to diversify the economy. Declining competitiveness in tourism and especially in manufacturing are expected to act as a drag on growth until structural changes are effected. Overvaluation of the Cypriot pound prior to the adoption of the euro in 2008 had kept inflation in check. Trade is vital to the Cypriot economy — the island is not self-sufficient in food and until the recent offshore gas discoveries had few known natural resources – and the trade deficit continues to grow. Cyprus must import fuels, most raw materials, heavy machinery, and transportation equipment. More than 50% of its trade is with the rest of the European Union, especially Greece and the United Kingdom, while the Middle East receives 20% of exports. In 1991, Cyprus introduced a value-added tax (VAT), which is at 19% as of 13 January 2014. Cyprus ratified the new world trade agreement (General Agreement on Tariffs and Trade, GATT) in 1995 and began implementing it fully on 1 January 1996. 
EU accession negotiations started on 31 March 1998, and concluded when Cyprus joined the organization as a full member in 2004. Investment climate The Cyprus legal system is founded on English law, and is therefore familiar to most international financiers. Cyprus's legislation was aligned with EU norms in the period leading up to EU accession in 2004. Restrictions on foreign direct investment were removed, permitting 100% foreign ownership in many cases. Foreign portfolio investment in the Cyprus Stock Exchange was also liberalized. In 2002 a modern, business-friendly tax system was put in place with a 12.5% corporate tax rate, one of the lowest in the EU. Cyprus has concluded treaties on double taxation with more than 40 countries, and, as a member of the Eurozone, has no exchange restrictions. Non-residents and foreign investors may freely repatriate proceeds from investments in Cyprus. Role as a financial hub In the years following the dissolution of the Soviet Union it gained great popularity as a portal for investment from the West into Russia and Eastern Europe, becoming for companies of that origin the most common tax haven. More recently, there have been increasing investment flows from the West through Cyprus into Asia, particularly China and India, South America and the Middle East. In addition, businesses from outside the EU use Cyprus as their entry-point for investment into Europe. The business services sector remains the fastest growing sector of the economy, and had overtaken all other sectors in importance. CIPA has been fundamental towards this trend. As of 2016, CySEC (the Financial Regulator), regulates many of the world's biggest brands in retail forex as they generally see it as an efficient way to get an EU operating license and industry know-how. Agriculture Cyprus produced in 2018: 106 thousand tons of potato; 37 thousand tons of tangerine; 23 thousand tons of grape; 20 thousand tons of orange; 19 thousand tons of grapefruit; 19 thousand tons of olive; 18 thousand tons of wheat; 18 thousand tons of barley; 15 thousand tons of tomato; 13 thousand tons of watermelon; 10 thousand tons of melon; In addition to smaller productions of other agricultural products. Oil and gas Surveys suggest more than 100 trillion cubic feet (2.831 trillion cubic metres) of reserves lie untapped in the eastern Mediterranean basin between Cyprus and Israel – almost equal to the world's total annual consumption of natural gas. In 2011, Noble Energy estimated that a pipeline to Leviathan gas field could be in operation as soon as 2014 or 2015. In January 2012, Noble Energy announced a natural gas field discovery. It attracted Shell, Delek and Avner as partners. Several production sharing contracts for exploration were signed with international companies, including Eni, KOGAS, Total, ExxonMobil and Qatar Petroleum. It is necessary to develop infrastructure for landing the gas in Cyprus and for liquefaction for export. Role as a shipping hub Cyprus constitutes one of the largest ship management centers in the world; around 50 ship management companies and marine-related foreign enterprises are conducting their international activities in the country while the majority of the largest ship management companies in the world have established fully fledged offices on the island. Its geographical position at the crossroads of three continents and its proximity to the Suez Canal has promoted merchant shipping as an important industry for the island nation. 
Cyprus has the tenth-largest registered fleet in the world, with 1,030 vessels accounting for 31,706,000 dwt as of 1 January 2013. Tourism Tourism is an important factor of the island state's economy, culture, and overall brand development. With over 2 million tourist arrivals per year, it is the 40th most popular destination in the world. However, per capita of local population, it ranks 17th. The industry has been honored with various international awards, spanning from the Sustainable Destinations Global Top 100, VISION on Sustainable Tourism, Totem Tourism and Green Destination titles bestowed to Limassol and Paphos in December 2014. The island beaches have been awarded with 57 Blue Flags. Cyprus became a full member of the World Tourism Organization when it was created in 1975. According to the World Economic Forum's 2013 Travel and Tourism Competitiveness Index, Cyprus' tourism industry ranks 29th in the world in terms of overall competitiveness. In terms of Tourism Infrastructure, in relation to the tourism industry Cyprus ranks 1st in the world. The Cyprus Tourism Organization has a status of a semi-governmental organisation charged with overseeing the industry practices and promoting the island worldwide. Trade In 2008 fiscal aggregate value of goods and services exported by Cyprus was in the region of $1.53 billion. It primarily exported goods and services such as citrus fruits, cement, potatoes, clothing and pharmaceuticals. At that same period total financial value of goods and services imported by Cyprus was about $8.689 billion. Prominent goods and services imported by Cyprus in 2008 were consumer goods, machinery, petroleum and other lubricants, transport equipment and intermediate goods. Cypriot trade partners Traditionally Greece has been a major export and import partner of Cyprus. In fiscal 2007, it amounted for 21.1 percent of total exports of Cyprus. At that same period it was responsible for 17.7 percent of goods and services imported by Cyprus. Some other important names in this regard are UK and Italy. Eurozone crisis In 2012, Cyprus became affected by the Eurozone financial and banking crisis. In June | Central Bank and the Ministry of Finance. They also set a core Tier 1 ratio – a measure of financial strength – of 9% by the end of 2013 for banks, which could then rise to 10% in 2014. In 2014, Harris Georgiades pointed that exiting the Memorandum with the European troika required a return to the markets. This he said, required "timely, effective and full implementation of the program." The Finance Minister stressed the need to implement the Memorandum of understanding without an additional loan. In 2015, Cyprus was praised by the President of the European Commission for adopting the austerity measures and not hesitating to follow a tough reform program. In 2016, Moody's Investors Service changed its outlook on the Cypriot banking system to positive from stable, reflecting the view that the recovery will restore banks to profitability and improve asset quality. The quick economic recovery was driven by tourism, business services and increased consumer spending. Creditor confidence was also strengthened, allowing Bank of Cyprus to reduce its Emergency Liquidity Assistance to €2.0 billion (from €9.4 billion in 2013). Within the same period, Bank of Cyprus chairman Josef Ackermann urged the European Union to pledge financial support for a permanent solution to the Cyprus dispute. 
Economy of Northern Cyprus The economy of Turkish-occupied northern Cyprus is about one-fifth the size of the economy of the government-controlled area, while GDP per capita is around half. Because the de facto administration is recognized only by Turkey, it has had much difficulty arranging foreign financing, and foreign firms have hesitated to invest there. The economy mainly revolves around the agricultural sector and government service, which together employ about half of the work force. The tourism sector also contributes substantially into the economy. Moreover, the small economy has seen some downfalls because the Turkish lira is legal tender. To compensate for the economy's weakness, Turkey has been known to provide significant financial aid. In both parts of the island, water shortage is a growing problem, and several desalination plants are planned. The economic disparity between the two communities is pronounced. Although the economy operates on a free-market basis, the lack of private and government investment, shortages of skilled labor and experienced managers, and inflation and the devaluation of the Turkish lira continue to plague the economy. Trade with Turkey Turkey is by far the main trading partner of Northern Cyprus, supplying 55% of imports and absorbing 48% of exports. In a landmark case, the European Court of Justice (ECJ) ruled on 5 July 1994 against the British practice of importing produce from Northern Cyprus based on certificates of origin and phytosanitary certificates granted by the de facto authorities. The ECJ decided that only goods bearing certificates of origin from the internationally recognized Republic of Cyprus could be imported by EU member states. The decision resulted in a considerable decrease of Turkish Cypriot exports to the EU: from $36.4 million (or 66.7% of total Turkish Cypriot exports) in 1993 to $24.7 million in 1996 (or 35% of total exports) in 1996. Even so, the EU continues to be the second-largest trading partner of Northern Cyprus, with a 24.7% share of total imports and 35% share of total exports. The most important exports of Northern Cyprus are citrus and dairy products. These are followed by rakı, scrap and clothing. Assistance from Turkey is the mainstay of the Turkish Cypriot economy. Under the latest economic protocol (signed 3 January 1997), Turkey has undertaken to provide loans totalling $250 million for the purpose of implementing projects included in the protocol related to public finance, tourism, banking, and privatization. Fluctuation in the Turkish lira, which suffered from hyperinflation every year until its replacement by the Turkish new lira in 2005, exerted downward pressure on the Turkish Cypriot standard of living for many years. The de facto authorities have instituted a free market in foreign exchange and permit residents to hold foreign-currency denominated bank accounts. This encourages transfers from Turkish Cypriots living abroad. Happiness Economic factors such as the GDP and national income strongly correlate with the happiness of a nation's citizens. In a study published in 2005, citizens from a sample of countries were asked to rate how happy or unhappy they were as a whole on a scale of 1 to 7 (Ranking: 1. Completely happy, 2. Very happy, 3. Fairly happy,4. Neither happy nor unhappy, 5. Fairly unhappy, 6. Very unhappy, 7. Completely unhappy.) Cyprus had a score of 5.29. On the question of how satisfied citizens were with their main job, Cyprus scored 5.36 on a scale of 1 to 7 (Ranking: 1. 
Completely satisfied, 2. Very satisfied, 3. Fairly satisfied, 4. Neither satisfied nor dissatisfied, 5. Fairly dissatisfied, 6. Very dissatisfied, 7. Completely dissatisfied.) In another ranking of happiness, Northern Cyprus ranks 58th and Cyprus ranks 61st, according to the 2018 World Happiness Report.
it human or freight. Since the last railway was dismantled in 1952, the only remaining modes of transport are by road, by sea, and by air. Roads Of the 12,118 km of roads in the areas controlled by the Republic of Cyprus in 2006, 7,850 km were paved, while 4,268 km were unpaved. In 1996, the Turkish Cypriot area showed a close, but smaller ratio of paved to unpaved with about 1,370 km out of 2,350 km paved and 980 km unpaved. As a legacy of British rule, Cyprus is one of only three EU nations in which vehicles drive on the left. Motorways A1 Nicosia to Limassol A2 connects A1 near Pera Chorio with A3 by Larnaca A3 Larnaca Airport to Agia Napa, also serves as a circular road for Larnaca. A5 connects A1 near Kofinou with A3 by Larnaca A6 Pafos to Limassol A7 Pafos to Polis (final plans) A9 Nicosia to Astromeritis (partially under construction) A22 Dali industrial area to Anthoupolis, Lakatamia (Nicosia 3rd ring road, final plans) Public buses In 2006, extensive plans were announced to improve and expand bus services and restructure public transport throughout Cyprus, with the financial backing of the European Union Development Bank. In 2010, the new revised and expanded bus network was implemented into the system. The bus system is numbered: 1 - 33 Limassol daytime local routes 40 - 95A Limassol daytime rural routes 100 - 259 Nicosia daytime buses 300s Nicosia night network route 101/102/201/301/ 500s Famagusta/Ayia Napa district daytime route 400s Larnaca area route 600s Paphos area routes 700s Larnaca - Famagusta/Ayia Napa area routes N (prevex routes) Limassol night buses network Some bus routes are: 30 Le Meridien Hotel 1 - MY MALL up to every 10 minutes 101 Ayia Napa Waterpark - Paralimni up to every 15 minutes 610 Pafos Harbour Station - Market up to every 10 minutes 611 Pafos Harbour Station - Waterpark up to every 10 minutes 615 Pafos Harbour Station - Coral bay up to every 10 minutes 618 Pafos Harbour Station - Pafos karavella bus station Every 30 mins (Mon - Sat daytime) Licensed vehicles Road transport is the dominant form of transport on the island. Figures released by the International Road Federation in 2007 show that Cyprus holds the highest | Harbour Station - Waterpark up to every 10 minutes 615 Pafos Harbour Station - Coral bay up to every 10 minutes 618 Pafos Harbour Station - Pafos karavella bus station Every 30 mins (Mon - Sat daytime) Licensed vehicles Road transport is the dominant form of transport on the island. Figures released by the International Road Federation in 2007 show that Cyprus holds the highest car ownership rate in the world with 742 cars per 1,000 people. Public transport in Cyprus is limited to privately run bus services (except in Nicosia), taxis, and interurban 'shared' taxi services (locally referred to as service taxis). Thus, private car ownership in the country is the fifth highest per capita in the world. However, in 2006 extensive plans were announced to expand and improve bus services and restructure public transport throughout Cyprus, with the financial backing of the European Union Development Bank Ports and harbours The ports of Cyprus are operated and maintained by the Cyprus Ports Authority. Major harbours of the island are Limassol Harbour, and Larnaca Harbour, which service cargo, passenger, and cruise ships. Limassol is the larger of the two, and handles a large volume of both cargo and cruise vessels. 
Larnaca is primarily a cargo port, but it played a big part in the evacuation of foreign nationals from Lebanon in 2006 and in the subsequent humanitarian aid effort. A smaller cargo dock also exists at Vasilikos, near Zygi (a small town between Larnaca and Limassol). Smaller vessels and private yachts can dock at marinas in Cyprus: Larnaca Marina in Larnaca, St Raphael Marina in Limassol, and Paphos Harbour. List of ports and harbours: Larnaca, Limassol, Paphos, Vasilikos. Public bicycle sharing system Bike in Action is
with 179 states (including the Holy See and Palestinian National Authority ) and is United Nations, Union for the Mediterranean and European Union full member. It does not maintain diplomatic relations with: Azerbaijan, Kosovo, Benin, Republic of Congo, Central African Republic, Equatorial Guinea, Djibouti, South Sudan Bhutan Kiribati, Palau, Tuvalu Haiti, Saint Kitts and Nevis Cook Islands, Niue Abkhazia, South Ossetia, Somaliland, Sahrawi Arab Democratic Republic, Artsakh, Republic of China (Taiwan), Transnistria The Republic of Cyprus is not recognised by Turkey. International disputes The 1974 invasion of the Turkish army divided the island nation into two. The internationally recognised Republic of Cyprus currently has effective control in the south of the island (59% of the island's land area) while its area not under its effective control makes up 37% of the island. Turkey utilising the territory occupied during the invasion recognizes a declared separatist UDI of Turkish Cypriots in 1983, contrary to multiple United Nations Security Council Resolutions. The two territories of the Republic are separated by a United Nations Buffer Zone (4% of the island); there are two UK sovereign base areas mostly within the Greek Cypriot portion of the island. Illicit drugs Cyprus is a minor transit point for cannabis and cocaine via air routes and container traffic to Europe, especially from Lebanon; some hashish transits as well. The island | and head of the Cypriot Orthodox Church - was the Greek Cypriot Ethnarch, or de facto leader of the community. A highly influential figure well before independence, he participated in the 1955 Bandung Conference. After independence, Makarios took part in the 1961 founding meeting of the Non-Aligned Movement in Belgrade. Reasons for this neutrality may lie in the extreme pressures exerted on the infant Republic by its larger neighbours, Turkey and Greece. Intercommunal rivalries and movements for union with Greece or partial union with Turkey may have persuaded Makarios to steer clear of close affiliation with either side. In any case Cyprus became a high-profile member of the Non-Aligned Movement and retained its membership until its entry into the European Union in 2004. At the non-governmental level, Cyprus has also been a member of the popular extension of the Non-Aligned Movement, the Afro-Asian Peoples' Solidarity Organisation hosting several high-level meetings. Immediately after the 1974 Greek-sponsored coup d'état and the Turkish invasion, Makarios secured international recognition of his administration as the legitimate government of the whole island. This was disputed only by Turkey, which currently recognizes only the Turkish Republic of Northern Cyprus, established in 1983. Since the 1974 crisis, the chief aim of the foreign policy of the Republic of Cyprus has been to secure the withdrawal of Turkish forces and the reunification of the island under the most favorable constitutional and territorial settlement possible. This campaign has been pursued primarily through international forums such as the United Nations and the Non-Aligned Movement, and in recent years through the European Union. Bilateral relations Africa Americas Asia Europe Cyprus' 1990 application for full EU membership caused a storm in the Turkish Cypriot community, which argued that the move required their consent. Following the December 1997 EU Summit decisions on EU enlargement, accession negotiations began 31 March 1998. Cyprus joined the European Union on 1 May 2004. 
To fulfil its commitment as a member of the European Union, Cyprus withdrew from the Non-Aligned Movement on accession, retaining observer status. Oceania
ichthyosaurs, last remaining temnospondyls, and nonmammalian were already extinct millions of years before the event occurred. Coccolithophorids and molluscs, including ammonites, rudists, freshwater snails, and mussels, as well as organisms whose food chain included these shell builders, became extinct or suffered heavy losses. For example, ammonites are thought to have been the principal food of mosasaurs, a group of giant marine reptiles that became extinct at the boundary. Omnivores, insectivores, and carrion-eaters survived the extinction event, perhaps because of the increased availability of their food sources. At the end of the Cretaceous, there seem to have been no purely herbivorous or carnivorous mammals. Mammals and birds that survived the extinction fed on insects, larvae, worms, and snails, which in turn fed on dead plant and animal matter. Scientists theorise that these organisms survived the collapse of plant-based food chains because they fed on detritus. In stream communities, few groups of animals became extinct. Stream communities rely less on food from living plants and more on detritus that washes in from land. This particular ecological niche buffered them from extinction. Similar, but more complex patterns have been found in the oceans. Extinction was more severe among animals living in the water column, than among animals living on or in the seafloor. Animals in the water column are almost entirely dependent on primary production from living phytoplankton, while animals living on or in the ocean floor feed on detritus or can switch to detritus feeding. The largest air-breathing survivors of the event, crocodilians and champsosaurs, were semiaquatic and had access to detritus. Modern crocodilians can live as scavengers and can survive for months without food and go into hibernation when conditions are unfavorable, and their young are small, grow slowly, and feed largely on invertebrates and dead organisms or fragments of organisms for their first few years. These characteristics have been linked to crocodilian survival at the end of the Cretaceous. Geologic formations The high sea level and warm climate of the Cretaceous meant large areas of the continents were covered by warm, shallow seas, providing habitat for many marine organisms. The Cretaceous was named for the extensive chalk deposits of this age in Europe, but in many parts of the world, the deposits from the Cretaceous are of marine limestone, a rock type that is formed under warm, shallow marine conditions. Due to the high sea level, there was extensive space for such sedimentation. Because of the relatively young age and great thickness of the system, Cretaceous rocks are evident in many areas worldwide. Chalk is a rock type characteristic for (but not restricted to) the Cretaceous. It consists of coccoliths, microscopically small calcite skeletons of coccolithophores, a type of algae that prospered in the Cretaceous seas. Stagnation of deep sea currents in middle Cretaceous times caused anoxic conditions in the sea water leaving the deposited organic matter undecomposed. Half of the world's petroleum reserves were laid down at this time in the anoxic conditions of what would become the Persian Gulf and the Gulf of Mexico. In many places around the world, dark anoxic shales were formed during this interval, such as the Mancos Shale of western North America. These shales are an important source rock for oil and gas, for example in the subsurface of the North Sea. 
Europe In northwestern Europe, chalk deposits from the Upper Cretaceous are characteristic for the Chalk Group, which forms the white cliffs of Dover on the south coast of England and similar cliffs on the French Normandian coast. The group is found in England, northern France, the low countries, northern Germany, Denmark and in the subsurface of the southern part of the North Sea. Chalk is not easily consolidated and the Chalk Group still consists of loose sediments in many places. The group also has other limestones and arenites. Among the fossils it contains are sea urchins, belemnites, ammonites and sea reptiles such as Mosasaurus. In southern Europe, the Cretaceous is usually a marine system consisting of competent limestone beds or incompetent marls. Because the Alpine mountain chains did not yet exist in the Cretaceous, these deposits formed on the southern edge of the European continental shelf, at the margin of the Tethys Ocean. North America During the Cretaceous, the present North American continent was isolated from the other continents. In the Jurassic, the North Atlantic already opened, leaving a proto-ocean between Europe and North America. From north to south across the continent, the Western Interior Seaway started forming. This inland sea separated the elevated areas of Laramidia in the west and Appalachia in the east. Three dinosaur clades found in Laramidia (troodontids, therizinosaurids and oviraptorosaurs) are absent from Appalachia from the Coniacian through the Maastrichtian. Paleogeography During the Cretaceous, the late-Paleozoic-to-early-Mesozoic supercontinent of Pangaea completed its tectonic breakup into the present-day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin mountain building (orogenies) that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies. Gondwana had begun to break up during the Jurassic Period, but its fragmentation accelerated during the Cretaceous and was largely complete by the end of the period. South America, Antarctica and Australia rifted away from Africa (though India and Madagascar remained attached to each other until around 80 million years ago); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide. To the north of Africa the Tethys Sea continued to narrow. During the most of the Late Cretaceous, North America would be divided in two by the Western Interior Seaway, a large interior sea, separating Laramidia to the west and Appalachia to the east, then receded late in the period, leaving thick marine deposits sandwiched between coal beds. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged. The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous rock record especially fine. 
Famous formations from North America include the rich marine fossils of Kansas's Smoky Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous exposures occur in Europe (e.g., the Weald) and China (the Yixian Formation).
In the area that is now India, massive lava beds called the Deccan Traps were erupted in the very late Cretaceous and early Paleocene. Climate The cooling trend of the last epoch of the Jurassic continued into the first age of the Cretaceous. There is evidence that snowfalls were common in the higher latitudes, and the tropics became wetter than during the Triassic and Jurassic. Glaciation was however restricted to high-latitude mountains, though seasonal snow may have existed farther from the poles. Rafting by ice of stones into marine environments occurred during much of the Cretaceous, but evidence of deposition directly from glaciers is limited to the Early Cretaceous of the Eromanga Basin in southern Australia. After the end of the first age, however, temperatures increased again, and these conditions were almost constant until the end of the period. The warming may have been due to intense volcanic activity which produced large quantities of carbon dioxide. Between 70 and 69 Ma and 66–65 Ma, isotopic ratios indicate elevated atmospheric CO2 pressures with levels of 1000–1400 ppmV and mean annual temperatures in west Texas between . Atmospheric CO2 and temperature relations indicate a doubling of pCO2 was accompanied by a ~0.6 °C increase in temperature. The production of large quantities of magma, variously attributed to mantle plumes or to extensional tectonics, further pushed sea levels up, so that large areas of the continental crust were covered with shallow seas. The Tethys Sea connecting the tropical oceans east to west also helped to warm the global climate. Warm-adapted plant fossils are known from localities as far north as Alaska and Greenland, while dinosaur fossils have been found within 15 degrees of the Cretaceous south pole. It was suggested that there was Antarctic marine glaciation in the Turonian Age, based on isotopic evidence. However, this has subsequently been suggested to be the result of inconsistent isotopic proxies, with evidence of polar rainforests during this time interval at 82° S. A very gentle temperature gradient from the equator to the poles meant weaker global winds, which drive the ocean currents, resulted in less upwelling and more stagnant oceans than today. This is evidenced by widespread black shale deposition and frequent anoxic events. Sediment cores show that tropical sea surface temperatures may have briefly been as warm as , warmer than at present, and that they averaged around . Meanwhile, deep ocean temperatures were as much as warmer than today's. Flora Flowering plants (angiosperms) make up around 90% of living plant species today. Prior to the rise of angiosperms, during the Jurassic and the Early Cretaceous, the higher flora was dominated by gymnosperm groups, including cycads, conifers, ginkgophytes, gnetophytes and close relatives, as well as the extinct Bennettitales. Other groups of plants included pteridosperms or "seed ferns", a collective term to refer to disparate groups of fern-like plants that produce seeds, including groups such as Corystospermaceae and Caytoniales. The exact origins of angiosperms are uncertain, although molecular evidence suggests that they are not closely related to any living group of gymnosperms. The earliest widely accepted evidence of flowering plants are monosulcate (single grooved) pollen grains from the late Valanginian (~ 134 million years ago) found in Israel, and Italy, initially at low abundance. 
Molecular clock estimates conflict with fossil estimates, suggesting the diversification of crown-group angiosperms during the Upper Triassic or Jurassic, but such estimates are difficult to reconcile with the heavily sampled pollen record and the distinctive tricolpate to tricolporoidate (triple grooved) pollen of eudicot angiosperms. Among the oldest records of Angiosperm macrofossils are Montsechia from the Barremian aged Las Hoyas beds of Spain and Archaefructus from the Barremian-Aptian boundary Yixian Formation in China. Tricolpate pollen distinctive of eudicots first appears in the Late Barremian, while the earliest remains of monocots are known from the Aptian. Flowering plants underwent a rapid radiation beginning during the middle Cretaceous, becoming the dominant group of land plants by the end of the period, coindicent with the decline of previously dominant groups such as conifers. The oldest known fossils of grasses are from the Albian, with the family having diversified into modern groups by the end of the Cretaceous. The oldest large angiosperm trees are known from the Turonian (c. 90 Ma) of New Jersey, with the trunk having a preserved diameter of and an estimated height of . During the Cretaceous, Polypodiales ferns, which make up 80% of living fern species, would also begin to diversify. Terrestrial fauna On land, mammals were generally small sized, but a very relevant component of the fauna, with cimolodont multituberculates outnumbering dinosaurs in some sites. Neither true marsupials nor placentals existed until the very end, but a variety of non-marsupial metatherians and non-placental eutherians had already begun to diversify greatly, ranging as carnivores (Deltatheroida), aquatic foragers (Stagodontidae) and herbivores (Schowalteria, Zhelestidae). Various "archaic" groups like eutriconodonts were common in the Early Cretaceous, but by the Late Cretaceous northern mammalian faunas were dominated by multituberculates and therians, with dryolestoids dominating South America. The apex predators were archosaurian reptiles, especially dinosaurs, which were at their most diverse stage. Avians such as the ancestors of modern day birds also diversified. They inhabited every continent, and were even found in cold polar latitudes. Pterosaurs were common in the early and middle Cretaceous, but as the Cretaceous proceeded they declined for poorly understood reasons (once thought to be due to competition with early birds, but now it is understood avian adaptive radiation is not consistent with pterosaur decline), and by the |
The diagnosis may initially be suspected in a person with rapidly progressing dementia, particularly when it is accompanied by characteristic signs and symptoms such as involuntary muscle jerking, difficulty with coordination/balance and walking, and visual disturbances. Further testing can support the diagnosis and may include: Electroencephalography – may show a characteristic generalized periodic sharp-wave pattern. Periodic sharp wave complexes develop in half of the people with sporadic CJD, particularly in the later stages. Cerebrospinal fluid (CSF) analysis for elevated levels of 14-3-3 protein can be supportive in the diagnosis of sCJD. However, a positive result should not be regarded as sufficient for the diagnosis. The Real-Time Quaking-Induced Conversion (RT-QuIC) assay has a diagnostic sensitivity of more than 80% and a specificity approaching 100% when used to detect PrPSc in CSF samples of people with CJD. It is therefore suggested as a high-value diagnostic method for the disease. MRI of the brain – often shows high signal intensity in the caudate nucleus and putamen bilaterally on T2-weighted images. In recent years, studies have shown that the tumour marker neuron-specific enolase (NSE) is often elevated in CJD cases; however, its diagnostic utility is seen primarily when combined with a test for the 14-3-3 protein. Screening tests to identify infected asymptomatic individuals, such as blood donors, are not yet available, though methods have been proposed and evaluated. Imaging Imaging of the brain may be performed during medical evaluation, both to rule out other causes and to obtain supportive evidence for diagnosis. Imaging findings are variable in their appearance, and also variable in sensitivity and specificity. While imaging plays a lesser role in the diagnosis of CJD, characteristic findings on brain MRI in some cases may precede the onset of clinical manifestations. Brain MRI is the most useful imaging modality for changes related to CJD. Of the MRI sequences, diffusion-weighted imaging (DWI) sequences are the most sensitive. Characteristic findings are as follows: Focal or diffuse diffusion restriction involving the cerebral cortex and/or basal ganglia. In about 24% of cases DWI shows only cortical hyperintensity; in 68%, cortical and subcortical abnormalities; and in 5%, only subcortical anomalies. The most iconic and striking cortical abnormality has been called "cortical ribboning" or the "cortical ribbon sign" because of hyperintensities resembling ribbons appearing in the cortex on MRI. Involvement of the thalamus can be found in sCJD and is even stronger and more constant in vCJD. Varying degrees of symmetric T2-hyperintense signal changes in the basal ganglia (i.e., caudate and putamen), and to a lesser extent the globus pallidus and occipital cortex. Cerebellar atrophy. Brain FDG PET-CT tends to be markedly abnormal and is increasingly used in the investigation of dementias. People with CJD normally show hypometabolism on FDG PET. Histopathology Testing of tissue remains the most definitive way of confirming the diagnosis of CJD, although it must be recognized that even biopsy is not always conclusive. In one-third of people with sporadic CJD, deposits of "prion protein (scrapie)", PrPSc, can be found in the skeletal muscle and/or the spleen. Diagnosis of vCJD can be supported by biopsy of the tonsils, which harbor significant amounts of PrPSc; however, biopsy of brain tissue is the definitive diagnostic test for all other forms of prion disease.
Due to its invasiveness, biopsy will not be done if clinical suspicion is sufficiently high or low. A negative biopsy does not rule out CJD, since it may predominate in a specific part of the brain. The classic histologic appearance is spongiform change in the gray matter: the presence of many round vacuoles from one to 50 micrometers in the neuropil, in all six cortical layers in the cerebral cortex or with diffuse involvement of the cerebellar molecular layer. These vacuoles appear glassy or eosinophilic and may coalesce. Neuronal loss and gliosis are also seen. Plaques of amyloid-like material can be seen in the neocortex in some cases of CJD. However, extra-neuronal vacuolization can also be seen in other disease states. Diffuse cortical vacuolization occurs in Alzheimer's disease, and superficial cortical vacuolization occurs in ischemia and frontotemporal dementia. These vacuoles appear clear and punched-out. Larger vacuoles encircling neurons, vessels, and glia are a possible processing artifact. Classification Types of CJD include: Sporadic (sCJD), caused by the spontaneous misfolding of prion-protein in an individual. This accounts for 85% of cases of CJD. Familial (fCJD), caused by an inherited mutation in the prion-protein gene. This accounts for the majority of the other 15% of cases of CJD. Acquired CJD, caused by contamination with tissue from an infected person, usually as the result of a medical procedure (iatrogenic CJD). Medical procedures that are associated with the spread of this form of CJD include blood transfusion from the infected person, use of human-derived pituitary growth hormones, gonadotropin hormone therapy, and corneal and meningeal transplants. Variant Creutzfeldt–Jakob disease (vCJD) is a type of acquired CJD potentially acquired from bovine spongiform encephalopathy or caused by consuming food contaminated with prions. Treatment As of 2022, there is no cure or effective treatment for CJD. Some of the symptoms like twitching can be managed, but otherwise treatment is palliative care. Psychiatric symptoms like anxiety and depression can be treated with sedatives and antidepressants. Myoclonic jerks can be handled with clonazepam or sodium valproate. Opiates can help in pain. Seizures are very uncommon but can nevertheless be treated with antiepileptic drugs. Prognosis The condition is universally fatal. As of 1981, nobody is known to have lived longer than 2.5 years after the onset of CJD symptoms./ The longest recorded survivor of variant Creutzfeldt–Jakob disease (vCJD) was Jonathan Simms, a Northern Irish man who lived 10 years after his diagnosis. Epidemiology CDC monitors the occurrence of CJD in the United States through periodic reviews of national mortality data. According to the CDC: CJD occurs worldwide at a rate of about 1 case per million population per year. On the basis of mortality surveillance from 1979 to 1994, the annual incidence of CJD remained stable at approximately 1 case per million people in the United States. In the United States, CJD deaths among people younger than 30 years of age are extremely rare (fewer than five deaths per billion per year). The disease is found most frequently in people 55–65 years of age, but cases can occur in people older than 90 years and younger than 55 years of age. In more than 85% of cases, the duration of CJD is less than 1 year (median: four months) after the onset of symptoms. Further information from the CDC: Risk of developing CJD increases with age. 
CJD incidence was 3.5 cases per million among those over 50 years of age between 1979 and 2017. Approximately 85% of CJD cases are sporadic and 10–15% of CJD cases are due to inherited mutations of the prion protein gene. CJD deaths and the age-adjusted death rate in the United States indicate an increasing trend in the number of deaths between 1979 and 2017. Although the reasons are not fully understood, additional information suggests that CJD rates in African American and other nonwhite groups are lower than in whites. While the mean age at onset is approximately 67 years, cases of sCJD have been reported as young as 17 years and over 80 years of age. Mental capabilities rapidly deteriorate and the average amount of time from onset of symptoms to death is 7 to 9 months. According to a 2020 systematic review on the international epidemiology of CJD: Surveillance studies from 2005 and later show the estimated global incidence is 1–2 cases per million population per year. Sporadic CJD (sCJD) incidence increased over the years 1990–2018 in the UK. Probable or definite sCJD deaths also increased over the years 1996–2018 in twelve additional countries. CJD incidence is greatest in those over the age of 55, with an average age of 67 years. The intensity of CJD surveillance increases the number of reported cases, often in countries where CJD epidemics have occurred in the past and where surveillance resources are greatest. An increase in surveillance and reporting of CJD is most likely in response to BSE and vCJD. Possible factors contributing to an increase of CJD incidence are an aging population, population increase, clinician awareness, and more accurate diagnostic methods. Since CJD symptoms are similar to other neurological conditions, it is also possible that CJD is mistaken for stroke, acute nephropathy, general dementia, and hyperparathyroidism. History The disease was first described by German neurologists Hans Gerhard Creutzfeldt in 1920 and shortly afterward by Alfons Maria Jakob, giving it the name Creutzfeldt–Jakob. Some of the clinical findings described in their first papers do not match current criteria for Creutzfeldt–Jakob disease, and it has been speculated that at least two of the people in the initial studies were suffering from a different ailment. An early description of familial CJD stems from the German psychiatrist and neurologist Friedrich Meggendorfer (1880–1953). A study published in 1997 counted more than 100 cases worldwide of transmissible CJD, and new cases continued to appear at the time. The first report of suspected iatrogenic CJD was published in 1974. Animal experiments showed that corneas of infected animals could transmit CJD, and the causative agent spreads along visual pathways. A second case of CJD associated with a corneal transplant was reported without details. In 1977, CJD transmission caused by silver electrodes previously used in the brain of a person with CJD was first reported. Transmission occurred despite the decontamination of the electrodes with ethanol and formaldehyde. Retrospective studies identified four other cases likely of similar cause. The rate of transmission from a single contaminated instrument is unknown, although it is not
of a design to make those books readily accessible. He suggested that the building, equipment and maintenance of the public library ought to be the responsibility of the Municipality rather than the Government. T. P. F. McNeice, the then President of the Singapore City Council, as well as leading educationists of the time, thought the suggestion "an excellent, first-class suggestion to meet a definite and urgent need." McNeice also agreed that the project ought to be the responsibility of the City Council. Also in favour of the idea were the Director of Education, A. W. Frisby, who thought that there ought to be branches of the library fed by the central library, Raffles Institution Principal P. F. Howitt, Canon R. K. S. Adams (Principal of St. Andrew's School) and Homer Cheng, the President of the Chinese Y.M.C.A. The Principal of the Anglo-Chinese School, H. H. Peterson, suggested the authorities also consider a mobile school library. While Parkinson had originally suggested that this be a Municipal and not a Government undertaking, something changed. A public meeting, convened by the Friends of Singapore - Parkinson was its President - at the British Council Hall on 15 May, decided that Singapore's memorial to King George VI would take the form of a public library, possibly with mobile units and sub-libraries in the out-of-town districts. Parkinson, in addressing the assembly, noted that Raffles Library was not a free library, did not have vernacular sections, and its building could not be air-conditioned. McNeice, the Municipal President, then proposed that a resolution be sent to the Government that the meeting considered the most appropriate memorial to the late King ought to take the form of a library (or libraries), and urged the Government to set up a committee, with enough non-Government representation, to consider the matter. The Government got involved, and a Government spokesperson spoke to the Straits Times about this on 16 May, saying that the Singapore Government welcomed proposals from the public on the form a memorial to King George ought to take, whether a public library, as suggested by Parkinson, or some other form. In the middle of 1952, the Singapore Government began setting up a committee to consider the suggestions made on the form Singapore's memorial to King George VI ought to take. G. G. Thomson, the Government's Public Relations Secretary, informed the Straits Times that the committee would have official and non-Government representation and added that, apart from Parkinson's suggestion of a free public library, a polytechnic had also been suggested. W. L. Blythe, the Colonial Secretary, making it clear where his vote lay, pointed out that Singapore, at that time, already had a library, the Raffles Library. From news coverage we learn that yet another committee had been formed, this time to consider what would be necessary to establish an institution along the lines of the London Polytechnic. Blythe stated that the arguments he had heard in favour of a polytechnic were very strong. The Director of Raffles Library and Museum, W. M. F. Tweedie, was in favour of the King George VI free public library, but up to the end of November nothing had been heard of any developments towards that end.
Tweedie suggested the ground beside the British Council as being suitable for such a library, and, if the public library was built, he would suggest that all the books at the Raffles Library be moved to the new site, so that the space thus vacated could be used for a public art gallery. Soon after, the Government, which was not supposed to have been involved in the first place - the suggestion made by Parkinson and accepted by City Council President T. P. F. McNeice was that this be a Municipal and not a Government undertaking - approved the proposal to set up a polytechnic as a memorial to King George VI. And Singapore continued with its subscription library and was without a free public library as envisioned by Parkinson. However, his call did not go unheeded. The following year, in August 1953, the Lee Foundation pledged a dollar-for-dollar match up to $375,000 towards the establishment of a national library, provided that it was a free public library, open to men and women of every race, class, creed, and colour. It was not, however, until November 1960 that Parkinson's vision was realised, when the new library, free and for all, was completed and opened to the public. Film Censorship Consultative Committee That same month he was also appointed, by the Singapore Government, Chairman of a committee set up to study film censorship in the Colony and suggest changes, if necessary. Their terms of reference were to enquire into the existing procedure and legislation relating to cinematograph film censorship and to make recommendations with a view to improving the system, including legislation. They were also asked to consider whether the Official Film Censor should continue to be the controller of the British films quota, and to consider the memorandum of the film trade submitted to the Governor earlier that year. Investigating, archiving and writing Malaya's past At the beginning of December 1950, Parkinson made an appeal, at the Singapore Rotary Club, for old log books, diaries, newspaper files, ledgers or maps accumulated over the years. He asked that these be passed to the Raffles Library or the University of Malaya library, instead of being thrown away, as they might aid research and help those studying the history of the country to set down an account of what had happened in Malaya since 1867. "The time will come when school-children will be taught the history of their own land rather than of Henry VIII or the capture of Quebec." Parkinson told his audience that there was a large volume of documentary evidence about Malaya written in Portuguese and Dutch. He said that the arrival of the Pluto in Singapore, one of the first vessels to pass through the Suez Canal when it opened in 1869, might be described as the moment when British Malaya was born. "I would urge you not to scrap old correspondence just because it clutters up the office. Send it to a library where it may some day be of great value," he said. In September 1951 the magazine British Malaya published Parkinson's letter calling for the formation of one central Archives Office where all the historical records of Malaya and Singapore could be properly preserved, pointing out that it would be of inestimable value to administrators, historians, economists, social science investigators and students.
In his letter, Parkinson, who was still abroad attending the Anglo-American Conference of Historians in London, said that the formation of an Archives Office was already in discussion, and was urgent, in view of the climate, where documents were liable to damage by insects and mildew. He said that many private documents relating to Malaya were kept in the U.K., where they were not appreciated because names like Maxwell, Braddell and Swettenham might mean nothing there. "The establishment of a Malayan Archives Office would do much to encourage the transfer of these documents," he wrote. On 22 May 1953, Parkinson convened a meeting at the British Council, Stamford Road, Singapore, to form the Singapore branch of the Malayan Historical Society. Speaking at the inaugural meeting of the society's Singapore branch, Parkinson, addressing the more than 100 people attending, said the aims of the branch would be to assist in the recording of history, folklore, tradition and customs of Malaya and its people and to encourage the preservation of objects of historical and cultural interest. Of Malayan history, he said, it "has mostly still to be written. Nor can it even be taught in the schools until that writing has been done." Parkinson had been urging the Singapore and Federation Governments to set up a national archives since 1950. In June 1953 he urged the speedy establishment of a national archives, where, "in air-conditioned rooms, on steel shelves, with proper skilled supervision and proper precaution against fire and theft, the records of Malayan history might be preserved indefinitely and at small expense." He noted that cockroaches had nibbled away at many vital documents and records, shrouding many years of Malaya's past in mystery, aided by moths and silverfish and abetted by negligent officials. A start had, by then, already been made - an air-conditioned room at the Federal Museum had been set aside for storing important historical documents and preserving them from cockroaches and decay, the work of Peter Williams-Hunt, the Federation Director of Museums and Adviser on Aborigine Affairs, who had died that month. He noted, however, that the problems of supervising archives and collecting old documents had still to be solved. In January 1955 Parkinson formed the University of Malaya's Archaeological Society and became its first President. Upon commencement, the Society had a membership of 53, which was reported to be the largest of its kind in Southeast Asia at the time. "Drive to discover the secrets of S.E. Asia. Hundreds of amateurs will delve into mysteries of the past." In April 1956 it was reported that 'For the first time, a long-needed Standard History of Malaya is to be published for students.' According to the news report, a large-scale project to develop a ten-volume series, the result of ten years of research by University of Malaya staff, was in progress, detailing events from the Portuguese occupation of 1511 to the then present day. The first volume, written by Parkinson, covered the years 1867 to 1877 and was to be published within three months. It was estimated that the last volume would be released after 1960. The report noted that, by that time, Parkinson and his wife had already released two books on history for junior students, entitled "The Heroes" and "Malayan Fables." Three months passed and the book remained unpublished.
It was not till 1960 that British Intervention in Malaya (1867–1877), that first volume, finally found its way onto bookshelves and into libraries. By that time, the press reported, the series had expanded into a twelve-volume set. Malayan history syllabus In January 1951 Parkinson was interviewed by the New Zealand film producer and director Wynona “Noni” Hope Wright. He told of his reorganisation of the Department of History during the last term to facilitate a new syllabus. The interview took place in Parkinson's sitting room beneath a frieze depicting Malaya's history, painted by Parkinson. Departing from the usual syllabus, Parkinson had decided to leave out European History almost entirely in order to give greater focus to Southeast Asia, particularly Malaya. The course, designed experimentally, took in the study of world history up to 1497 in the first year, the impact of different European nations on Southeast Asia in the second year, and the study of Southeast Asia, particularly Malaya, after the establishment of British influence at the Straits Settlements in the third year. Students who made it through and decided to specialise in history would then have been brought to a point where they could profitably undertake original research in the history of modern Malaya, i.e. the 19th and 20th centuries, an area where, according to Parkinson, little had been done, with hardly any serious research attempted for the period after 'the transfer,' in 1867. Parkinson hoped that lecturing on this syllabus would ultimately produce a full-scale history of Malaya. This would include discovering documentation from Portuguese and Dutch sources from the time when those two countries still had a foothold in Malaya. He said that, while the period of development of the Straits Settlements under the East India Company was well documented - the bulk of these records being archived at the Raffles Museum - local records after 1867 were not as plentiful, and that it would be necessary to reconstruct those records from microfilm copies of documents kept in the United Kingdom. The task for the staff at the History Department was made formidable because of their unfamiliarity with the Dutch and Portuguese languages. "I have no doubt that the history of Malaya must finally be written by Malayans, but we can at least do very much to prepare the way," Parkinson told Wright. "Scholars trained at this University in the spirit and technique of historical research, a study divorced from all racial and religious animosities, a study concerned only with finding the truth and explaining it in a lucid and attractive literary form, should be able to make a unique contribution to the mutual understanding of East and West," he said. "History apart, nothing seems to be of more vital importance in our time than the promotion of this understanding. In no field at the present time does the perpetuation of distrust and mutual incomprehension seem more dangerous. If we can, from this university, send forth graduates who can combine learning and ways of thought of the Far East and of the West, they may play a great part in
On 18 August 1950, Parkinson opened a week-long exhibition on the "History of English Handwriting," at the British Council centre, Stamford Road, Singapore. On 21 March 1952, he opened an exhibition of photographs from The Times of London which had been shown widely in different parts of the world. The exhibition comprised a selection of photographs spanning 1921 to 1951.
140 photographs were on display for a month at the British Council Hall, Singapore, showing scenes ranging from the German surrender to the opening of the Festival of Britain by the late King. He opened an exhibition of photographs taken by students of the University of Malaya during their tour of India, at the University Arts Theatre in Cluny Road, Singapore, on 10 October 1953. Victor Purcell Towards the end of August, the Professor of Far Eastern History at Cambridge University, Dr. Victor Purcell, who was also a former Acting Secretary of Chinese Affairs in Singapore, addressed the Kuala Lumpur Rotary Club. The Straits Times, quoting Purcell, noted, "Professor C. N. Parkinson had been appointed to the Chair of History at the University of Malaya and 'we can confidently anticipate that under his direction academic research into Malaya's history will assume a creative aspect which it has not possessed before.'" Johore Transfer Committee In October, Parkinson was appointed, by the Senate of the University of Malaya, to head a special committee of experts to consult on technical details regarding the transfer of the University to Johore. Along with him were Professor R. E. Holttum (Botany), and Acting Professors C. G. Webb (Physics) and D. W. Fryer (Geography). Library and Museum In November, Parkinson was appointed a member of the Committee for the management of Raffles Library and Museum, replacing Professor G. G. Hough, who had resigned. In March 1952, Parkinson proposed a central public library for Singapore as a memorial to King George VI, commemorating that monarch's reign. He is reported to have said, "Perhaps the day has gone by for public monuments except in a useful form. And if that be so, might not some enterprise of local importance be graced with the late King's name? One plan he could certainly have warmly approved would be that of building a Central Public Library," he opined. Parkinson noted that the Raffles Library was growing in usefulness and would, in a short time, outgrow the building that then housed it. He said, given the educational work that was producing a large literate population demanding books in English, Malay and Chinese, what was surely needed was a genuinely public library, air-conditioned to preserve the books, and of a design to make those books readily accessible.
ocean to another (e.g., Caledonian Canal, Panama Canal). Features At their simplest, canals consist of a trench filled with water. Depending on the stratum the canal passes through, it may be necessary to line the cut with some form of watertight material such as clay or concrete. When this is done with clay, it is known as puddling. Canals need to be level, and while small irregularities in the lie of the land can be dealt with through cuttings and embankments, for larger deviations other approaches have been adopted. The most common is the pound lock, which consists of a chamber within which the water level can be raised or lowered, connecting either two stretches of canal at different levels or the canal with a river or the sea. When there is a hill to be climbed, flights of many locks in short succession may be used. Prior to the development of the pound lock in 984 AD in China by Chhaio Wei-Yo, and later in Europe in the 15th century, either flash locks consisting of a single gate, or ramps sometimes equipped with rollers, were used to change the level. Flash locks were only practical where there was plenty of water available. Locks use a lot of water, so builders have adopted other approaches for situations where little water is available. These include boat lifts, such as the Falkirk Wheel, which use a caisson of water in which boats float while being moved between two levels; and inclined planes, where a caisson is hauled up a steep railway. To cross a stream, road or valley (where the delay caused by a flight of locks at either side would be unacceptable) the valley can be spanned by a navigable aqueduct – a famous example in Wales is the Pontcysyllte Aqueduct (now a UNESCO World Heritage Site) across the valley of the River Dee. Another option for dealing with hills is to tunnel through them. An example of this approach is the Harecastle Tunnel on the Trent and Mersey Canal. Tunnels are only practical for smaller canals. Some canals attempted to keep changes in level down to a minimum. These canals, known as contour canals, would take longer, winding routes along which the land was at a uniform altitude. Other, generally later, canals took more direct routes, requiring the use of various methods to deal with the change in level. Canals have various features to tackle the problem of water supply. In some cases, such as the Suez Canal, the canal is simply open to the sea. Where the canal is not at sea level, a number of approaches have been adopted. Taking water from existing rivers or springs was an option in some cases, sometimes supplemented by other methods to deal with seasonal variations in flow. Where such sources were unavailable, reservoirs – either separate from the canal or built into its course – and back pumping were used to provide the required water. In other cases, water pumped from mines was used to feed the canal. In certain cases, extensive "feeder canals" were built to bring water from sources located far from the canal. Where large amounts of goods are loaded or unloaded, such as at the end of a canal, a canal basin may be built. This would normally be a section of water wider than the general canal. In some cases, the canal basins contain wharfs and cranes to assist with movement of goods. When a section of the canal needs to be sealed off so it can be drained for maintenance, stop planks are frequently used. These consist of planks of wood placed across the canal to form a dam. They are generally placed in pre-existing grooves in the canal bank.
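To give a rough sense of the point above that locks use a lot of water, each full locking cycle passes roughly one chamber-volume of water from the upper pound to the lower one. The sketch below estimates that volume for a hypothetical narrowboat lock; the chamber dimensions and lift are illustrative assumptions, not figures from the text.

```python
# Rough estimate of the water passed downhill by one pound-lock cycle.
# All dimensions are illustrative assumptions for a small narrowboat lock.
chamber_length_m = 22.0  # assumed usable chamber length
chamber_width_m = 2.1    # assumed chamber width
lift_m = 2.0             # assumed difference in level between the two pounds

volume_m3 = chamber_length_m * chamber_width_m * lift_m  # water released per locking
litres = volume_m3 * 1000

print(f"Water passed downhill per locking: about {volume_m3:.0f} m^3 ({litres:,.0f} litres)")
```

With these assumed dimensions a single passage sends on the order of 90,000 litres down the canal, which is why summit pounds commonly rely on the reservoirs, back pumping and feeder channels described above, and why boat lifts and inclined planes were attractive where water was scarce.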
On more modern canals, "guard locks" or gates were sometimes placed to allow a section of the canal to be quickly closed off, either for maintenance or to prevent a major loss of water due to a canal breach. History The transport capacity of pack animals and carts is limited. A mule can carry an eighth-ton maximum load over a journey measured in days and weeks, though much more for shorter distances and periods with appropriate rest. Besides, carts need roads. Transport over water is much more efficient and cost-effective for large cargoes. Ancient canals The oldest known canals were irrigation canals, built in Mesopotamia circa 4000 BC, in what is now Iraq. The Indus Valley Civilization of ancient India (circa 3000 BC) developed sophisticated irrigation and storage systems, including the reservoirs built at Girnar in 3000 BC. This was the first time that such a planned civil project had taken place in the ancient world. In Egypt, canals date back at least to the time of Pepi I Meryre (reigned 2332–2283 BC), who ordered a canal built to bypass the cataract on the Nile near Aswan. In ancient China, large canals for river transport were established as far back as the Spring and Autumn Period (8th–5th centuries BC), the longest one of that period being the Hong Gou (Canal of the Wild Geese), which according to the ancient historian Sima Qian connected the old states of Song, Zhang, Chen, Cai, Cao, and Wei. The Caoyun System of canals was essential for imperial taxation, which was largely assessed in kind and involved enormous shipments of rice and other grains. By far the longest canal was the Grand Canal of China, still the longest canal in the world today and the oldest extant one. It is long and was built to carry the Emperor Yang Guang between Zhuodu (Beijing) and Yuhang (Hangzhou). The project began in 605 and was completed in 609, although much of the work combined older canals, the oldest section of the canal existing since at least 486 BC. Even in its narrowest urban sections it is rarely less than wide. Greek engineers were also among the first to use canal locks, by which they regulated the water flow in the Ancient Suez Canal as early as the 3rd century BC. The Hohokam were a society in the North American Southwest, in what is now part of Arizona, United States, and Sonora, Mexico. Their irrigation systems supported the largest population in the Southwest by 1300 CE. Archaeologists working at a major archaeological dig in the 1990s in the Tucson Basin, along the Santa Cruz River, identified a culture and people that may have been the ancestors of the Hohokam. This prehistoric group occupied southern Arizona as early as 2000 BCE, and in the Early Agricultural Period grew corn, lived year-round in sedentary villages, and developed sophisticated irrigation canals. The large-scale Hohokam irrigation network in the Phoenix metropolitan area was the most complex in ancient North America. A portion of the ancient canals has been renovated for the Salt River Project and now helps to supply the city's water. Middle Ages In the Middle Ages, water transport was several times cheaper and faster than transport overland.
Overland transport by animal-drawn conveyances was used around settled areas, but unimproved roads required pack animal trains, usually of mules, to carry any degree of mass, and while a mule could carry an eighth ton, it also needed teamsters to tend it, and one man could only tend perhaps five mules, meaning overland bulk transport was also expensive, as men expected compensation in the form of wages, room and board. This was because long-haul roads were unpaved, more often than not too narrow for carts, much less wagons, and in poor condition, wending their way through forests and marshy or muddy quagmires as often as over unimproved but dry footing. In that era, as today, greater cargoes, especially bulk goods and raw materials, could be transported by ship far more economically than by land; in the pre-railroad days of the industrial revolution, water transport was the gold standard of fast transportation. The first artificial canal in Western Europe was the Fossa Carolina, built at the end of the 8th century under the personal supervision of Charlemagne. In Britain, the Glastonbury Canal is believed to be the first post-Roman canal and was built in the middle of the 10th century to link the River Brue at Northover with Glastonbury Abbey, a distance of about . Its initial purpose is believed to have been the transport of building stone for the abbey, but later it was used for delivering produce, including grain, wine and fish, from the abbey's outlying properties. It remained in use until at least the 14th century, but possibly as late as the mid-16th century. More lasting and of greater economic impact were canals like the Naviglio Grande, built between 1127 and 1257 to connect Milan with the Ticino River. The Naviglio Grande is the most important of the Lombard "navigli" and the oldest functioning canal in Europe. Later, canals were built in the Netherlands and Flanders to drain the polders and assist transportation of goods and people. Canal building was revived in this age because of commercial expansion from the 12th century. River navigations were improved progressively by the use of single, or flash, locks. Taking boats through these used large amounts of water, leading to conflicts with watermill owners; to correct this, the pound or chamber lock first appeared in the 10th century in China and in Europe in 1373 in Vreeswijk, Netherlands. Another important development was the mitre gate, which was, it is presumed, introduced in Italy by Bertola da Novate in the 16th century. This allowed wider gates and also removed the height restriction of guillotine locks. To break out of the limitations caused by river valleys, the first summit level canals were developed, with the Grand Canal of China in 581–617 AD, whilst in Europe the first, also using single locks, was the Stecknitz Canal in Germany in 1398. Africa In the Songhai Empire of West Africa, several canals were constructed under Sunni Ali and Askia Muhammad between Kabara and Timbuktu in the 15th century. These were used primarily for irrigation and transport. Sunni Ali also attempted to construct a canal from the Niger River to Walata to facilitate the conquest of the city, but his progress was halted when he went to war with the Mossi Kingdoms. Early modern period Around 1500–1800, the first summit level canal to use pound locks in Europe was the Briare Canal connecting the Loire and Seine (1642), followed by the more ambitious Canal du Midi (1683) connecting the Atlantic to the Mediterranean.
This included a staircase of 8 locks at Béziers, a tunnel, and three major aqueducts. Canal building progressed steadily in Germany in the 17th and 18th centuries, with three great rivers, the Elbe, Oder and Weser, being linked by canals. In post-Roman Britain, the first early modern period canal built appears to have been the Exeter Canal, which was surveyed in 1563 and opened in 1566. The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook, between the Boston, Massachusetts neighbourhoods of Dedham and Hyde Park, connecting the higher waters of the Charles River with the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills. In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718. Industrial Revolution See also: History of the British canal system; History of turnpikes and canals in the United States The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities. By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be. The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland, constructed by Thomas Steers in 1741. The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because, although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals. In the mid-eighteenth century, the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke, and it was called the Bridgewater Canal. It opened in 1761 and was the first major British canal. The new canals proved highly successful. The boats on the canal were horse-drawn, with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network.
Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard. The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. Because of this huge increase in supply, the Bridgewater canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, earning back what had been spent on its construction within just a few years. This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire, the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away, by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal. The new canal system was both cause and effect of the rapid industrialization of the Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals. For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods. In a further development, there was often out-and-out speculation, where people would try to buy shares in a newly floated company simply to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length. Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other. In the United States, canal companies were initially chartered by individual states. These early canals were constructed, owned, and operated by private joint-stock companies. Four had been completed when the War of 1812 broke out; these were the South Hadley Canal (opened 1795) in Massachusetts, Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802) also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie.
The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft (169 m). The Erie Canal, with its easy connections to much of the U.S. Midwest and to New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more, it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. With farmers assured of a market for their products, the settlement of the U.S. Midwest was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished
The oldest canal in the European settlements of North America, technically a mill race built for industrial purposes, is Mother Brook between the Boston, Massachusetts neighbourhoods of Dedham and Hyde Park connecting the higher waters of the Charles River and the mouth of the Neponset River and the sea. It was constructed in 1639 to provide water power for mills. In Russia, the Volga–Baltic Waterway, a nationwide canal system connecting the Baltic Sea and Caspian Sea via the Neva and Volga rivers, was opened in 1718. Industrial Revolution See also: History of the British canal system See also: History of turnpikes and canals in the United States The modern canal system was mainly a product of the 18th century and early 19th century. It came into being because the Industrial Revolution (which began in Britain during the mid-18th century) demanded an economic and reliable way to transport goods and commodities in large quantities. By the early 18th century, river navigations such as the Aire and Calder Navigation were becoming quite sophisticated, with pound locks and longer and longer "cuts" (some with intermediate locks) to avoid circuitous or difficult stretches of river. Eventually, the experience of building long multi-level cuts with their own locks gave rise to the idea of building a "pure" canal, a waterway designed on the basis of where goods needed to go, not where a river happened to be. The claim for the first pure canal in Great Britain is debated between "Sankey" and "Bridgewater" supporters. The first true canal in what is now the United Kingdom was the Newry Canal in Northern Ireland constructed by Thomas Steers in 1741. The Sankey Brook Navigation, which connected St Helens with the River Mersey, is often claimed as the first modern "purely artificial" canal because although originally a scheme to make the Sankey Brook navigable, it included an entirely new artificial channel that was effectively a canal along the Sankey Brook valley. However, "Bridgewater" supporters point out that the last quarter-mile of the navigation is indeed a canalized stretch of the Brook, and that it was the Bridgewater Canal (less obviously associated with an existing river) that captured the popular imagination and inspired further canals. In the mid-eighteenth century the 3rd Duke of Bridgewater, who owned a number of coal mines in northern England, wanted a reliable way to transport his coal to the rapidly industrializing city of Manchester. He commissioned the engineer James Brindley to build a canal for that purpose. Brindley's design included an aqueduct carrying the canal over the River Irwell. This was an engineering wonder which immediately attracted tourists. The construction of this canal was funded entirely by the Duke and was called the Bridgewater Canal. It opened in 1761 and was the first major British canal. The new canals proved highly successful. The boats on the canal were horse-drawn with a towpath alongside the canal for the horse to walk along. This horse-drawn system proved to be highly economical and became standard across the British canal network. Commercial horse-drawn canal boats could be seen on the UK's canals until as late as the 1950s, although by then diesel-powered boats, often towing a second unpowered boat, had become standard. The canal boats could carry thirty tons at a time with only one horse pulling – more than ten times the amount of cargo per horse that was possible with a cart. 
Because of this huge increase in supply, the Bridgewater Canal reduced the price of coal in Manchester by nearly two-thirds within just a year of its opening. The Bridgewater was also a huge financial success, earning back what had been spent on its construction within just a few years. This success proved the viability of canal transport, and soon industrialists in many other parts of the country wanted canals. After the Bridgewater Canal, early canals were built by groups of private individuals with an interest in improving communications. In Staffordshire the famous potter Josiah Wedgwood saw an opportunity to bring bulky cargoes of clay to his factory doors and to transport his fragile finished goods to market in Manchester, Birmingham or further away by water, minimizing breakages. Within just a few years of the Bridgewater's opening, an embryonic national canal network came into being, with the construction of canals such as the Oxford Canal and the Trent & Mersey Canal. The new canal system was both cause and effect of the rapid industrialization of the Midlands and the north. The period between the 1770s and the 1830s is often referred to as the "Golden Age" of British canals. For each canal, an Act of Parliament was necessary to authorize construction, and as people saw the high incomes achieved from canal tolls, canal proposals came to be put forward by investors interested in profiting from dividends, at least as much as by people whose businesses would profit from cheaper transport of raw materials and finished goods. In a further development, there was often out-and-out speculation, in which people would try to buy shares in a newly floated company simply to sell them on for an immediate profit, regardless of whether the canal was ever profitable, or even built. During this period of "canal mania", huge sums were invested in canal building, and although many schemes came to nothing, the canal system rapidly expanded to nearly 4,000 miles (over 6,400 kilometres) in length. Many rival canal companies were formed and competition was rampant. Perhaps the best example was Worcester Bar in Birmingham, a point where the Worcester and Birmingham Canal and the Birmingham Canal Navigations Main Line were only seven feet apart. For many years, a dispute about tolls meant that goods travelling through Birmingham had to be portaged from boats in one canal to boats in the other. In the United States, canal companies were initially chartered by individual states. These early canals were constructed, owned, and operated by private joint-stock companies. Four had been completed by the time the War of 1812 broke out: the South Hadley Canal (opened 1795) in Massachusetts, the Santee Canal (opened 1800) in South Carolina, the Middlesex Canal (opened 1802), also in Massachusetts, and the Dismal Swamp Canal (opened 1805) in Virginia. The Erie Canal (opened 1825) was chartered and owned by the state of New York and financed by bonds bought by private investors. The Erie Canal runs from Albany, New York, on the Hudson River to Buffalo, New York, at Lake Erie. The Hudson River connects Albany to the Atlantic port of New York City, and the Erie Canal completed a navigable water route from the Atlantic Ocean to the Great Lakes. The canal contains 36 locks and encompasses a total elevation differential of around 565 ft (172 m). The Erie Canal, with its easy connections to most of the U.S. 
Midwest and to New York City, quickly paid back all its invested capital (US$7 million) and started turning a profit. By cutting transportation costs in half or more, it became a large profit center for Albany and New York City, as it allowed the cheap transportation of many of the agricultural products grown in the Midwest of the United States to the rest of the world. From New York City these agricultural products could easily be shipped to other U.S. states or overseas. With farmers assured of a market for their products, settlement of the U.S. Midwest was greatly accelerated by the Erie Canal. The profits generated by the Erie Canal project started a canal-building boom in the United States that lasted until about 1850, when railroads started becoming seriously competitive in price and convenience. The Blackstone Canal (finished in 1828) in Massachusetts and Rhode Island fulfilled a similar role in the early Industrial Revolution between 1828 and 1848. The Blackstone Valley was a major contributor to the American Industrial Revolution; it was where Samuel Slater built his first textile mill. Power canals See also: Power canal A power canal refers to a canal used for hydraulic power generation, rather than for transport. Nowadays power canals are built almost exclusively as parts of hydroelectric power stations. Parts of the United States, particularly in the Northeast, had enough fast-flowing rivers that water power was the primary means of powering factories (usually textile mills) until after the American Civil War. 
able to understand novel sentences? The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction. The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration. Learning and development Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place. A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience. Memory Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes). Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). 
Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory. Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be: what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")? Perception and action Perception is the ability to take in information via the senses, and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects? (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is to look at how people process optical illusions. The Necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions. The study of haptic (tactile), olfactory, and gustatory stimuli also falls into the domain of perception. Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action. Consciousness Consciousness is the awareness of external objects and experiences within oneself. It gives the mind the ability to experience or feel a sense of self. Research methods Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory. Behavioral experiments In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices occur when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant). Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. Psychophysical responses. 
Psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include: sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. Brain imaging Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience. Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution. Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution. Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution. Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains. Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields. Computational modeling Computational models require a mathematically and logically formal representation of a problem. 
Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, on abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, on the neural and associative properties of the human brain; and (3) across the symbolic–subsymbolic border, including hybrid. Symbolic modeling evolved from the computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). They were developed by the first cognitive researchers and later used in information engineering for expert systems. Since the early 1990s it was generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. | process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." The goal of cognitive science is to understand the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning and to develop intelligent devices. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution. History The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); and includes writers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke. However, although these early writers contributed greatly to the philosophical discovery of mind and this would ultimately lead to the development of psychology, they were working with an entirely different set of tools and core concepts than those of the cognitive scientist. The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks. Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. 
The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation. The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition. In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order. The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego. In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI". Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility. 
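To make the contrast between symbolic and subsymbolic (connectionist-style) modeling concrete, the following is a minimal, invented Python sketch using the toy task of producing English past-tense forms; the rule function, the feature scheme, and the tiny training set are illustrative assumptions rather than any specific model from the literature.

```python
# Illustrative sketch only: a toy "symbolic" model (explicit rules plus an
# exception lexicon) versus a toy "subsymbolic" model (a perceptron that
# learns associations from examples). Data and names are invented.

# --- Symbolic approach: behavior follows explicitly stated rules -------------
IRREGULARS = {"go": "went", "sing": "sang", "eat": "ate"}  # hypothetical mini-lexicon

def past_tense_symbolic(verb: str) -> str:
    """Return the past tense by rule, unless the verb is a listed exception."""
    if verb in IRREGULARS:
        return IRREGULARS[verb]
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"

# --- Subsymbolic approach: behavior is stored only in learned weights --------
def features(verb: str) -> dict:
    """Crude character features standing in for a distributed input encoding."""
    return {f"first:{verb[0]}": 1.0, f"last:{verb[-1]}": 1.0, "bias": 1.0}

def train_perceptron(examples, epochs=20):
    """Learn to classify verbs as regular (True) or irregular (False)."""
    weights = {}
    for _ in range(epochs):
        for verb, is_regular in examples:
            feats = features(verb)
            score = sum(weights.get(name, 0.0) * value for name, value in feats.items())
            if (score >= 0) != is_regular:  # mistake-driven weight update
                sign = 1.0 if is_regular else -1.0
                for name, value in feats.items():
                    weights[name] = weights.get(name, 0.0) + sign * value
    return weights

if __name__ == "__main__":
    training = [("walk", True), ("play", True), ("sing", False), ("go", False)]
    weights = train_perceptron(training)
    print(past_tense_symbolic("walk"), past_tense_symbolic("sing"))  # walked sang
    score = sum(weights.get(n, 0.0) * v for n, v in features("talk").items())
    print("perceptron treats 'talk' as regular:", score >= 0)
```

The contrast is the point: the first model's competence is written down as rules a reader can inspect, while the second's lives entirely in numerical weights acquired from experience, which is roughly the distinction disputed between the symbolic and connectionist paradigms.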
Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input. Principles Levels of analysis A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. A person could be presented with a phone number and be asked to recall it after some delay of time; then the accuracy of the response could be measured. Another approach to measure cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real-time were available and it were known when each neuron fired it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis: The computational theory, specifying the goals of the computation; Representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and The hardware implementation, or how algorithm and representation may be physically realized. Interdisciplinary nature Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in hope of understanding the mind and its interactions with the surrounding world much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output |
irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details). Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: in Inuit languages. In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns. For cases in which the copula is omitted or takes zero form, see below. Additional uses of copular verbs A copular verb may also have other uses supplementary to or distinct from its uses as a copula. As auxiliary verbs The English copular verb be can be used as an auxiliary verb, expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle): Other languages' copulas have additional uses as auxiliaries. For example, French can be used to express passive voice similarly to English be, and both French and German are used to express the perfect forms of certain verbs: The last usage was formerly prevalent in English also. The auxiliary functions of these verbs derive from their copular function, and can be interpreted as a special case of the copular function (the verbal form that follows it being considered adjectival). Another auxiliary-type usage of the copula in English is together with the to-infinitive to denote an obligatory action or expected occurrence: "I am to serve you;" "The manager is to resign." It can be put also into past tense: "We were to leave at 9." For forms like "if I was/were to come," see English conditional sentences. (Note that by certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.) Existential usage The English to be, and its equivalents in certain other languages, also have a non-copular use as an existential verb, meaning "to exist." This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence. Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are and , where and are the equivalents of English "am," normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version (where the verb "to exist" is used rather than the copula or ‘to be’). Another type of existential usage is in clauses of the there is… or there are… type. 
Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French (which uses parts of the verb ‘to have,’ not the copula) or the Swedish (the passive voice of the verb for "to find"). For details, see existential clause. Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions. Zero copula In some languages, copula omission occurs within a particular grammatical context. For example, speakers of Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense: Russian: , ‘I (am a) person;’ Indonesian: ‘I (am) a human;’ Turkish: ‘s/he (is a) human;’ Hungarian: ‘s/he (is) a human;’ Arabic: أنا إنسان, ‘I (am a) human;’ Hebrew: אני אדם, ʔani ʔadam "I (am a) human;" Geʽez: አነ ብእሲ/ብእሲ አነ ʔana bəʔəsi / bəʔəsi ʔana "I (am a) man" / "(a) man I (am)"; Southern Quechua: payqa runam "s/he (is) a human." The usage is known generically as the zero copula. Note that in other tenses (sometimes in forms other than third person singular), the copula usually reappears. Some languages drop the copula in poetic or aphorismic contexts. Examples in English include The more, the better. Out of many, one. True that. Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages. In informal speech of English, the copula may also be dropped in general sentences, as in "She a nurse." It is a feature of African-American Vernacular English, but is also used by a variety of other English speakers in informal contexts. An example is the sentence "I saw twelve men, each a soldier." Examples in specific languages In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: ὁ οἴκος ἐστὶ μακρός, "the house is large," can be written μακρός ὁ οἴκος, "large the house (is)." In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (kan): Payqa runam — "(s)he is a human;" but: (paykuna) runakunam kanku "(they) are human."ap In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — He nui te whare, literally "a big the house," "the house (is) big;" I te tēpu te pukapuka, literally "at (past locative particle) the table the book," "the book (was) on the table;" Nō Ingarangi ia, literally "from England (s)he," "(s)he (is) from England," Kei te kai au, literally "at the (act of) eating I," "I (am) eating." Alternatively, in many cases, the particle ko can be used as a copulative (though not all instances of ko are used as thus, like all other Maori particles, ko has multiple purposes): Ko nui te whare "The house is big;" Ko te pukapuka kei te tēpu "It is the book (that is) on the table;" Ko au kei te kai "It is me eating." However, when expressing identity or class membership, ko must be used: Ko tēnei tāku pukapuka "This is my book;" Ko Ōtautahi he tāone i Te Waipounamu "Christchurch is a city in the South Island (of New Zealand);" Ko koe tōku hoa "You are my friend." 
Note that when expressing identity, ko can be placed on either object in the clause without changing the meaning (ko tēnei tāku pukapuka is the same as ko tāku pukapuka tēnei) but not on both (ko tēnei ko tāku pukapuka would be equivalent to saying "it is this, it is my book" in English). In Hungarian, zero copula is restricted to present tense in third person singular and plural: Ő ember/Ők emberek — "s/he is a human"/"they are humans;" but: (én) ember vagyok "I am a human," (te) ember vagy "you are a human," mi emberek vagyunk "we are humans," (ti) emberek vagytok "you (all) are humans." The copula also reappears for stating locations: az emberek a házban vannak, "the people are in the house," and for stating time: hat óra van, "it is six o'clock." However, the copula may be omitted in colloquial language: hat óra (van), "it is six o'clock." Hungarian uses copula lenni for expressing location: Itt van Róbert "Bob is here," but it is omitted in the third person present tense for attribution or identity statements: Róbert öreg "Bob is old;" ők éhesek "They are hungry;" Kati nyelvtudós "Cathy is a linguist" (but Róbert öreg volt "Bob was old," éhesek voltak "They were hungry," Kati nyelvtudós volt "Cathy was a linguist). In Turkish, both the third person singular and the third person plural copulas are omittable. Ali burada and Ali buradadır both mean "Ali is here," and Onlar aç and Onlar açlar both mean "They are hungry." Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal. The Turkish first person singular copula suffix is omitted when introducing oneself. Bora ben (I am Bora) is grammatically correct, but "Bora benim" (same sentence with the copula) is not for an introduction (but is grammatically correct in other cases). Further restrictions may apply before omission is permitted. For example, in the Irish language, is, the present tense of the copula, may be omitted when the predicate is a noun. Ba, the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., é, í, iad) preceding the noun is omitted as well. Additional copulas Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the following sentences (the predicative expression, the complement of the verb, is in italics): (This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.) Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion. These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae. In particular languages Indo-European In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German ist, Latin est, Persian ast and Russian jest', even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. 
The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: *es- (*h1es-), *sta- (*steh2-), *wes- and *bhu- (*bʰuH-). English The English copular verb be has eight forms (more than any other English verb): be, am, is, are, being, was, were, been. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula. The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under . A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... The acceptability of this construction is a disputed matter in English prescriptive grammar. The simple English copula "be" may on occasion be substituted by other verbs with near identical meanings. Persian In Persian, the verb to be can either take the form of ast (cognate to English is) or budan (cognate to be). {| border="0" cellspacing="2" cellpadding="1" |- | Aseman abi ast. |آسمان آبی است | the sky is blue |- | Aseman abi khahad bood. |{{lang|آسمان آبی خواهد بود | the sky will be blue |- | Aseman abi bood. |آسمان آبی بود | the sky was blue |} Hindustani In Hindustani (Hindi and Urdu), the copula होना ɦonɑ ہونا can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative). Some example sentences using the simple aspect are shown below: Besides the verb होना honā (to be), there are three other verbs which can also be used as the copula, they are रहना rêhnā (to stay), जाना jānā (to go), and आना ānā (to come). The following table shows the conjugations of the copula होना honā in the five grammatical moods in the simple aspect. The transliteration scheme used is ISO 15919. Romance Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be," the main one from the Latin esse (via Vulgar Latin essere; esse deriving from *es-), often referenced as sum (another of the Latin verb's principal parts) and a secondary one from stare (from *sta-), often referenced as sto. The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. egon and izan). (Note that the English words just used, "essential" and "state," are also cognate with the Latin infinitives esse and stare. The word "stay" also comes from Latin stare, through Middle French estai, stem of Old French ester.) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas (ser and estar), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese. In some cases, the verb itself changes the meaning of the adjective/sentence. 
The following examples are from Portuguese: Slavic Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case. As noted above under , Russian and other East Slavic languages generally omit the copula in the present tense. Irish In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics. Describing the subject's state or situation typically uses the normal VSO ordering with the verb bí. The copula is is used to state essential characteristics or equivalences. {| border="0" cellspacing="2" cellpadding="1" valign="top" | align=left valign=top| || align=right valign=top | || align=left valign=top | |- |Is fear é Liam.|| "Liam is a man." ||(Lit., "Is man Liam.") |- |Is leabhar é sin.|| "That is a book." ||(Lit., "Is book it that.") |} The word is is the copula (rhymes with the English word "miss"). The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, é is used (for "he" or "it"), as opposed to the normal pronoun sé; for a feminine singular noun, í is used (for "she" or "it"), as opposed to normal pronoun sí; for plural nouns, iad is used (for "they" or "those"), as opposed to the normal pronoun siad. To describe being in a state, condition, place, or act, the verb "to be" is used: Tá mé ag rith. "I am running." Bantu languages Chichewa In Chichewa, a Bantu language spoken mainly in Malawi, a very similar distinction exists between permanent and temporary states as in Spanish and Portuguese, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is ndi (negative sí): iyé ndi mphunzitsi "he is a teacher" iyé sí mphunzitsi "he is not a teacher" For the 1st and 2nd persons the particle ndi is combined with pronouns, e.g. ine "I": ine ndine mphunzitsi "I am a teacher" iwe ndiwe mphunzitsi "you (singular) are a teacher" ine síndine mphunzitsi "I am not a teacher" For temporary states and location, the copula is the appropriate form of the defective verb -li: iyé ali bwino "he is well" iyé sáli bwino "he is not well" iyé ali ku nyumbá "he is in the house" For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix: ine ndili bwino "I am well" iwe uli bwino "you (sg.) are well" kunyumbá kuli bwino "at home (everything) is fine" In the past tenses, -li is used for both types of copula: iyé analí bwino "he was well (this morning)" iyé ánaalí mphunzitsi "he was a teacher (at that time)" In the future, subjunctive, or conditional tenses, a form of the verb khala ("sit/dwell") is used as a copula: máwa ákhala bwino "he'll be fine tomorrow" Muylaq' Aymaran Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry. 
Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. 
However, it is also relevant to note that in a verb phrase like "It is old," the noun thantha meaning "old" does not require the copulative verbalizer, thantha-wa "It is old." It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears prior to vowel-suppressing suffixes in the interlinear gloss to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other (e.g. phonotactic) motivations). Consider the verb sara- which is inflected for the first person simple tense and so, predictably, loses its final root vowel: sar(a)-ct-wa "I go." However, prior to the suffixation of the first person simple suffix -ct to the same root nominalized with the agentive nominalizer -iri, the word must be verbalized. The fact that the final vowel of -iri below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: sar(a)-iri-ʋ-t-wa "I usually go." It is worthwhile to compare of the copulative verbalizer in Muylaq' Aymara as compared to La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house" in which the nominal root uta-ni "house-attributive" is verbalized with the copulative verbalizer, but note that the correspondence between the copulative verbalizer in these two variants is not always a strict one-to-one relation. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | La Paz Aymara: |ma: jisk'a uta-ni-:-ct(a)-wa |- | Muylaq' Aymara: |ma isk'a uta-ni-ʋ-ct-wa |} Georgian As in English, the verb "to be" (qopna) is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots -ar-, -kn-, -qav-, and -qop- (past participle) are used in the present tense, future tense, past tense and the perfective tenses respectively. Examples: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Masc'avlebeli var. | "I am a teacher." |- | Masc'avlebeli viknebi. | "I will be a teacher." |- | Masc'avlebeli viqavi. | "I was a teacher." |- | Masc'avlebeli vqopilvar. | "I have been a teacher." |- | Masc'avlebeli vqopiliqavi. | "I had been a teacher." |} Note that, in the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root qop (which is the expected root for the perfective tense) is followed by the root ar, which is the root for the present tense. In the pluperfective tense, again, the root qop is followed by the past tense root qav. This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Ich bin Lehrer gewesen. | "I have been a teacher," literally "I am teacher been." |- | Ich war Lehrer gewesen. | "I had been a teacher," literally "I was teacher been." |} Here, gewesen is the past participle of sein ("to be") in German. 
In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects. Haitian Creole Haitian Creole, a French-based creole language, has three forms of the copula: se, ye, and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration). Although no textual record exists of Haitian-Creole at its earliest stages of development from French, se is derived from French (written c'est), which is the normal French contraction of (that, written ce) and the copula (is, written est) (a form of the verb être). The derivation of ye is less obvious; but we can assume that the French source was ("he/it is," written il est), which, in rapidly spoken French, is very commonly pronounced as (typically written y est). The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian-Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula. Which of se / ye / Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules: 1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Li te Ø an Ayiti. | "She was in Haiti." || (Lit., "She past-tense in Haiti.") |- | Liv-la Ø jon. | "The book is yellow." || (Lit., "Book-the yellow.") |- | Timoun-yo Ø lakay. | "The kids are [at] home." || (Lit., "Kids-the home.") |} 2. Use se when the complement is a noun phrase. But note that, whereas other verbs come after any tense/mood/aspect particles (like pa to mark negation, or te to explicitly mark past tense, or ap to mark progressive aspect), se comes before any such particles: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Chal se ekriven. | "Charles is writer." |- | Chal, ki se ekriven, pa vini. | "Charles, who is writer, not come." |} 3. Use se where French and English have a dummy "it" subject: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Se mwen! | "It's me!" French C'est moi! |- | Se pa fasil. | "It's not easy," colloquial French C'est pas facile. |} 4. Finally, use the other copula form ye in situations where the sentence's syntax leaves the copula at the end of a phrase: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | Kijan ou ye? | "How you are?" |- | Pou kimoun liv-la te ye? | "Whose book was it?" || (Lit., "Of who book-the past-tense is?) |- | M pa konnen kimoun li ye. | "I don't know who he is." || (Lit., "I not know who he is.") |- | Se yon ekriven Chal ye. | "Charles is a writer!" || (Lit., "It's a writer Charles is;" cf. French C'est un écrivain qu'il est.) |} The above is, however, only a simplified analysis. Japanese The Japanese copula (most often translated into English as an inflected form of "to be") has many forms. E.g., The form da is used predicatively, na - attributively, de - adverbially or as a connector, and des - predicatively or as a politeness indicator. Examples: {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 私は学生だ。 | Watashi wa gakusei da. || "I'm a student." 
|| (lit., I TOPIC student COPULA) |- | これはペンです。 | Kore wa pen desu. || "This is a pen." || (lit., this TOPIC pen COPULA-POLITE) |} desu is the polite form of the copula. Thus, many sentences like the ones below are almost identical in meaning and differ in the speaker's politeness to the addressee and in nuance of how assured the person is of their statement. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | あれはホテルだ。 | Are wa hoteru da.|| "That's a hotel." || (lit., that TOPIC hotel COPULA) |- | あれはホテルです。 | Are wa hoteru desu.|| "That is a hotel." || (lit., that TOPIC hotel COPULA-POLITE) |} A predicate in Japanese is expressed by the predicative form of a verb, the predicative form of an adjective or noun + the predicative form of a copula. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | このビールはおいしい。 | Kono bīru wa oishii. || "This beer is delicious." |- | このビールはおいしいです。 | Kono bīru wa oishii desu. || "This beer is delicious." |- | *このビールはおいしいだ。 | *Kono bīru wa oishii da. || colspan=2 | This is grammatically incorrect because da can only be coupled with a noun to form a predicate. |} Other forms of copula: である de aru, であります de arimasu (used in writing and formal speaking) でございます de gozaimasu (used in public announcements, notices, etc.) The copula is subject to dialectal variation throughout Japan, resulting in forms like や ya in Kansai and じゃ ja in Hiroshima (see map above). Japanese also has two verbs corresponding to English "to be": aru and iru. They are not copulas but existential verbs. Aru is used for inanimate objects, including plants, whereas iru is used for animate things like people, animals, and robots, though there are exceptions to this generalization. {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 本はテーブルにある。 | Hon wa tēburu ni aru.|| "The book is on a table." |- | 小林さんはここにいる。 | Kobayashi-san wa koko ni iru.|| "Kobayashi is here." |} Japanese speakers, when learning English, often drop the auxiliary verbs "be" and "do," incorrectly believing that "be" is a semantically empty copula equivalent to "desu" and "da." Korean For sentences with predicate nominatives, the copula "이" (i-) is added to the predicate nominative (with no space in between). {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 바나나는 과일이다. | Ba-na-na-neun gwa-il -i-da. || "Bananas are a fruit." |} Some adjectives (usually colour adjectives) are nominalized and used with the copula "이"(i-). 1. Without the copula "이"(i-): {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 장미는 빨개요. | Jang-mi-neun ppal-gae-yo.|| "Roses are red." |} 2. With the copula "이"(i-): {| border="0" cellspacing="2" cellpadding="1" | align=left | || align=right | || align=left | |- | 장미는 빨간색이다. | Jang-mi-neun ppal-gan-saek-i-da.|| "Roses are red-coloured." |} Some Korean adjectives are derived using the copula. Separating these articles and nominalizing the former part will often result in a sentence with a related, but different meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable as the listener can decide what the speaker is trying to say using the context. Chinese N.B. The characters used are simplified ones, and the transcriptions given in italics reflect Standard Chinese pronunciation, using the pinyin system. 
In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., "to be tired" (累 lèi), "to be hungry" (饿 è), "to be located at" (在 zài), "to be stupid" (笨 bèn), and so forth. A sentence can consist simply of a pronoun and such a verb: for example, 我饿 wǒ è ("I am hungry"). Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very," "not," "quite," etc.); when not otherwise qualified, they are often preceded by 很 hěn, which in other contexts means "very," but in this use often has no particular meaning. Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": 是 shì. This is used frequently; for example, instead of having a verb meaning "to be Chinese," the usual expression is "to be a Chinese person" (我是中国人 wǒ shì Zhōngguórén; "I am a Chinese person;" "I am Chinese"). This is sometimes called an equative verb. Another possibility is for the complement to be just a noun modifier (ending in 的 de), the noun being omitted. Before the Han Dynasty, the character 是 served as a demonstrative pronoun meaning "this." (This usage survives in some idioms and proverbs.) Some linguists believe that 是 developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese we can say, for example: "George W. Bush, this president of the United States" meaning "George W. Bush is the president of the United States"). The character 是 appears to be formed as a compound of characters with the meanings of "early" and "straight." Another use of 是 in modern Chinese is in combination with the modifier 的 de to mean "yes" or to show agreement. For example: Question: 你的汽车是不是红色的? nǐ de qìchē shì bú shì hóngsè de? "Is your car red or not?" Response: 是的 shì de "Is," meaning "Yes," or 不是 bú shì "Not is," meaning "No." (A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct," 对 duì; the corresponding negative answer is 不对 bú duì, "not right.") Yet another use of 是 is in the shì...(de) construction, which is used to emphasize a particular element of the sentence. In Hokkien, 是 sī acts as the copula, and 是 is also the equivalent in Wu Chinese. Cantonese uses 係 instead of 是; similarly, Hakka uses 係 he55. Siouan languages In Siouan languages like Lakota, in principle almost all words are, according to their structure, verbs. So not only (transitive, intransitive and so-called "stative") verbs but even nouns often behave like verbs and do not need to have copulas. For example, the word wičháša refers to a man, and the verb "to-be-a-man" is expressed as wimáčhaša/winíčhaša/wičháša (I am/you are/he is a man). Yet there also is a copula héčha (to be a ...) that in most cases is used: wičháša hemáčha/heníčha/héčha (I am/you are/he is a man). In order to express the statement "I am a doctor by profession," one has to say pezuta wičháša hemáčha. But, in order to express that that person is THE doctor (say, the one who had been phoned to help), one must use another copula iyé (to be the one): pežúta wičháša (kiŋ) miyé yeló (medicine-man DEF ART I-am-the-one MALE ASSERT). In order to refer to space (e.g., Robert is in the house), various verbs are used, e.g., yaŋkÁ (lit., to sit) for humans, or háŋ/hé (to stand upright) for inanimate objects of a certain shape.
"Robert is in the house" could be translated as Robert thimáhel yaŋké (yeló), whereas "There's one restaurant next to the gas station" translates as Owótethipi wígli-oínažiŋ kiŋ hél isákhib waŋ hé. Constructed languages The constructed language Lojban has two words that act similar to a copula in natural languages. The clause me ... me'u turns whatever follows it into a |
that he had reached the Far East. As a colonial governor, Columbus was accused by his contemporaries of significant brutality and was soon removed from the post. Columbus's strained relationship with the Crown of Castile and its appointed colonial administrators in America led to his arrest and removal from Hispaniola in 1500, and later to protracted litigation over the perquisites that he and his heirs claimed were owed to them by the crown. Columbus's expeditions inaugurated a period of exploration, conquest, and colonization that lasted for centuries, helping create the modern Western world. The transfers between the Old World and New World that followed his first voyage are known as the Columbian exchange. Columbus was widely celebrated in the centuries after his death, but public perception has fractured in the 21st century as scholars have given greater attention to the harms committed under his governance, particularly the beginning of the depopulation of Hispaniola's indigenous Taínos caused by mistreatment and Old World diseases, as well as by that people's enslavement. Proponents of the Black Legend theory of historiography claim that Columbus has been unfairly maligned as part of a wider anti-Catholic sentiment. Many places in the Western Hemisphere bear his name, including the country of Colombia, the District of Columbia, and British Columbia. Early life Columbus's early life is obscure, but scholars believe he was born in the Republic of Genoa between 25 August and 31 October 1451. His father was Domenico Colombo, a wool weaver who worked in Genoa and Savona and who also owned a cheese stand at which young Christopher worked as a helper. His mother was Susanna Fontanarossa. He had three brothers, Bartolomeo, Giovanni Pellegrino, and Giacomo (also called Diego), as well as a sister named Bianchinetta. His brother Bartolomeo ran a cartography workshop in Lisbon for at least part of his adulthood. His native language is presumed to have been a Genoese dialect, although Columbus probably never wrote in that language. His name in the 16th-century Genoese language was Cristoffa Corombo. His name in Italian is Cristoforo Colombo, and in Spanish Cristóbal Colón. In one of his writings, he says he went to sea at the age of fourteen. In 1470, the Colombo family moved to Savona, where Domenico took over a tavern. Some modern authors have argued that he was not from Genoa but, instead, from the Aragon region of Spain or from Portugal. These competing hypotheses generally have been discounted by mainstream scholars. In 1473, Columbus began his apprenticeship as a business agent for the wealthy Spinola, Centurione, and Di Negro families of Genoa. Later, he made a trip to Chios, an Aegean island then ruled by Genoa. In May 1476, he took part in an armed convoy sent by Genoa to carry valuable cargo to northern Europe. He probably visited Bristol, England, and Galway, Ireland. It has been speculated that he may also have gone to Iceland in 1477, although many scholars doubt it. It is known that in the autumn of 1477, he sailed on a Portuguese ship from Galway to Lisbon, where he found his brother Bartolomeo, and they continued trading for the Centurione family. Columbus based himself in Lisbon from 1477 to 1485. In 1478, the Centuriones sent Columbus on a sugar-buying trip to Madeira. He married Felipa Perestrello e Moniz, daughter of Bartolomeu Perestrello, a Portuguese nobleman of Lombard origin, who had been the donatary captain of Porto Santo. In 1479 or 1480, Columbus's son Diego was born.
Between 1482 and 1485, Columbus traded along the coasts of West Africa, reaching the Portuguese trading post of Elmina at the Guinea coast (in present-day Ghana). Before 1484, Columbus returned to Porto Santo to find that his wife had died. He returned to Portugal to settle her estate and take his son Diego with him. He left Portugal for Castile in 1485, where he found a mistress in 1487, a 20-year-old orphan named Beatriz Enríquez de Arana. It is likely that Beatriz met Columbus when he was in Córdoba, a gathering site of many Genoese merchants and where the court of the Catholic Monarchs was located at intervals. Beatriz, unmarried at the time, gave birth to Columbus's natural son, Fernando Columbus in July 1488, named for the monarch of Aragon. Columbus recognized the boy as his offspring. Columbus entrusted his older, legitimate son Diego to take care of Beatriz and pay the pension set aside for her following his death, but Diego was negligent in his duties. Being ambitious, Columbus eventually learned Latin, Portuguese, and Castilian. He read widely about astronomy, geography, and history, including the works of Claudius Ptolemy, Pierre Cardinal d'Ailly's Imago Mundi, the travels of Marco Polo and Sir John Mandeville, Pliny's Natural History, and Pope Pius II's Historia Rerum Ubique Gestarum. According to historian Edmund Morgan, Columbus was not a scholarly man. Yet he studied these books, made hundreds of marginal notations in them and came out with ideas about the world that were characteristically simple and strong and sometimes wrong ... Quest for Asia Background Under the Mongol Empire's hegemony over Asia and the Pax Mongolica, Europeans had long enjoyed a safe land passage on the Silk Road to parts of East Asia (including China) and Maritime Southeast Asia, which were sources of valuable goods. With the fall of Constantinople to the Ottoman Empire in 1453, the Silk Road was closed to Christian traders. In 1474, the Florentine astronomer Paolo dal Pozzo Toscanelli suggested to King Afonso V of Portugal that sailing west across the Atlantic would be a quicker way to reach the Maluku (Spice) Islands, China, and Japan than the route around Africa, but Afonso rejected his proposal. In the 1480s, Columbus and his brother proposed a plan to reach the East Indies by sailing west. Columbus supposedly wrote Toscanelli in 1481 and received encouragement, along with a copy of a map the astronomer had sent Afonso implying that a westward route to Asia was possible. Columbus's plans were complicated by the opening of the Cape Route to Asia around Africa in 1488. Carol Delaney and other commentators have argued that Columbus was a Christian millennialist and apocalypticist and that these beliefs motivated his quest for Asia in a variety of ways. Columbus often wrote about seeking gold in the log books of his voyages and writes about acquiring the precious metal "in such quantity that the sovereigns... will undertake and prepare to go conquer the Holy Sepulcher" in a fulfillment of Biblical prophecy. Columbus also often wrote about converting all races to Christianity. Abbas Hamandi argues that Columbus was motivated by the hope of "[delivering] Jerusalem from Muslim hands" by "using the resources of newly discovered lands". Geographical considerations Despite a popular misconception to the contrary, nearly all educated Westerners of Columbus's time knew that the Earth is spherical, a concept that had been understood since antiquity. 
The techniques of celestial navigation, which use the position of the Sun and the stars in the sky, had long been in use by astronomers and were beginning to be implemented by mariners. As far back as the 3rd century BC, Eratosthenes had correctly computed the circumference of the Earth by using simple geometry and studying the shadows cast by objects at two remote locations. In the 1st century BC, Posidonius confirmed Eratosthenes's results by comparing stellar observations at two separate locations. These measurements were widely known among scholars, but Ptolemy's use of the smaller, old-fashioned units of distance led Columbus to underestimate the size of the Earth by about a third. Three cosmographical parameters determined the bounds of Columbus's enterprise: 1) the distance across the ocean between Europe and Asia, which depended on the extent of the oikumene, i.e., the Eurasian land-mass stretching east-west between Spain and China; 2) the circumference of the earth and the number of miles or leagues in a degree of longitude; and 3) the relative extent of the water and land surfaces of the globe, as held by the followers of Aristotle in medieval times, from which the width of the ocean could be deduced. From Pierre d'Ailly's Imago Mundi (1410), Columbus learned of Alfraganus's estimate that a degree of latitude (equal to approximately a degree of longitude along the equator) spanned 56.67 Arabic miles (equivalent to about 104 kilometers), but he did not realize that this was expressed in the Arabic mile (about 1,830 meters) rather than the shorter Roman mile (about 1,480 meters) with which he was familiar; the short calculation below illustrates the effect of this confusion. Columbus therefore estimated the size of the Earth to be about 75% of Eratosthenes's calculation, and the distance westward from the Canary Islands to the Indies as only 68 degrees, or 3080 nautical miles (a 58% margin of error). Most scholars of the time accepted Ptolemy's estimate that Eurasia spanned 180° longitude, rather than the actual 130° (to the Chinese mainland) or 150° (to Japan at the latitude of Spain). Columbus believed an even higher estimate, leaving a smaller percentage for water. In d'Ailly's Imago Mundi, Columbus read Marinus of Tyre's estimate that the longitudinal span of Eurasia was 225° at the latitude of Rhodes. Some historians, such as Samuel Morison, have suggested that he followed the statement in the apocryphal book 2 Esdras (6:42) that "six parts [of the globe] are habitable and the seventh is covered with water." He was also aware of Marco Polo's claim that Japan (which he called "Cipangu") was some distance to the east of China ("Cathay"), and closer to the equator than it is. He was influenced by Toscanelli's idea that there were inhabited islands even farther to the east than Japan, including the mythical Antillia, which he thought might lie not much farther to the west than the Azores. Based on his sources, Columbus estimated a distance of about 2,400 nautical miles from the Canary Islands west to Japan; the actual distance is roughly four times greater. No ship in the 15th century could have carried enough food and fresh water for such a long voyage, and the dangers involved in navigating through the uncharted ocean would have been formidable. Most European navigators reasonably concluded that a westward voyage from Europe to Asia was unfeasible. The Catholic Monarchs, however, having completed the Reconquista, an expensive war in the Iberian Peninsula, were eager to obtain a competitive edge over other European countries in the quest for trade with the Indies.
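The effect of the mile confusion can be made concrete with a rough back-of-the-envelope check. The Python sketch below is purely illustrative and not drawn from Columbus's own sources; it uses only the figures quoted in this subsection (56.67 miles per degree, an Arabic mile of roughly 1,830 meters, and a Roman mile of roughly 1,480 meters), and the only outside value it assumes is the modern equatorial circumference of about 40,075 km.

```python
# Back-of-the-envelope reconstruction of Columbus's unit confusion.
# All inputs except the modern circumference are the figures quoted above.

MILES_PER_DEGREE = 56.67          # Alfraganus's degree length, in "miles"
ARABIC_MILE_KM = 1.830            # approximate Arabic mile
ROMAN_MILE_KM = 1.480             # approximate Roman mile (Columbus's reading)
MODERN_CIRCUMFERENCE_KM = 40_075  # modern equatorial circumference (assumed)

def implied_circumference_km(mile_length_km: float) -> float:
    """Earth circumference implied by 56.67 'miles' per degree of latitude."""
    return MILES_PER_DEGREE * mile_length_km * 360

for label, mile_km in (("Arabic mile", ARABIC_MILE_KM), ("Roman mile", ROMAN_MILE_KM)):
    c = implied_circumference_km(mile_km)
    print(f"{label}: about {c:,.0f} km, "
          f"{c / MODERN_CIRCUMFERENCE_KM:.0%} of the modern value")

# Approximate output:
#   Arabic mile: about 37,334 km, 93% of the modern value
#   Roman mile: about 30,194 km, 75% of the modern value
```

Read in Arabic miles, Alfraganus's figure lands close to the true circumference; read in Roman miles, as Columbus did, it yields roughly three-quarters of it, matching the underestimate described above.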
Columbus's project, though far-fetched, held the promise of such an advantage. Nautical considerations Though Columbus was wrong about the number of degrees of longitude that separated Europe from the Far East and about the distance that each degree represented, he did take advantage of the trade winds, which would prove to be the key to his successful navigation of the Atlantic Ocean. He planned to first sail to the Canary Islands before continuing west with the northeast trade wind. Part of the return to Spain would require traveling against the wind using an arduous sailing technique called beating, during which progress is made very slowly. To effectively make the return voyage, Columbus would need to follow the curving trade winds northeastward to the middle latitudes of the North Atlantic, where he would be able to catch the "westerlies" that blow eastward to the coast of Western Europe. The navigational technique for travel in the Atlantic appears to have been exploited first by the Portuguese, who referred to it as the volta do mar ('turn of the sea'). Through his marriage to his first wife, Felipa Perestrello, Columbus had access to the nautical charts and logs that had belonged to her deceased father, Bartolomeu Perestrello, who had served as a captain in the Portuguese navy under Prince Henry the Navigator. In the mapmaking shop where he worked with his brother Bartolomeo, Columbus also had ample opportunity to hear the stories of old seamen about their voyages to the western seas, but his knowledge of the Atlantic wind patterns was still imperfect at the time of his first voyage. By sailing due west from the Canary Islands during hurricane season, skirting the so-called horse latitudes of the mid-Atlantic, he risked being becalmed and running into a tropical cyclone, both of which he avoided by chance. Quest for financial support for a voyage By about 1484, Columbus proposed his planned voyage to King John II of Portugal. The king submitted Columbus's proposal to his advisors, who rejected it, correctly, on the grounds that Columbus's estimate for a voyage of 2,400 nautical miles was only a quarter of what it should have been. In 1488, Columbus again appealed to the court of Portugal, and John II again granted him an audience. That meeting also proved unsuccessful, in part because not long afterwards Bartolomeu Dias returned to Portugal with news of his successful rounding of the southern tip of Africa (near the Cape of Good Hope). Columbus sought an audience with the monarchs Ferdinand II of Aragon and Isabella I of Castile, who had united several kingdoms in the Iberian Peninsula by marrying and were now ruling together. On 1 May 1486, permission having been granted, Columbus presented his plans to Queen Isabella, who, in turn, referred it to a committee. The learned men of Spain, like their counterparts in Portugal, replied that Columbus had grossly underestimated the distance to Asia. They pronounced the idea impractical and advised the Catholic Monarchs to pass on the proposed venture. To keep Columbus from taking his ideas elsewhere, and perhaps to keep their options open, the sovereigns gave him an allowance, totaling about 14,000 maravedis for the year, or about the annual salary of a sailor. In May 1489, the queen sent him another 10,000 maravedis, and the same year the monarchs furnished him with a letter ordering all cities and towns under their dominion to provide him food and lodging at no cost. 
Columbus also dispatched his brother Bartolomeo to the court of Henry VII of England to inquire whether the English crown might sponsor his expedition, but he was captured by pirates en route, and only arrived in early 1491. By that time, Columbus had retreated to La Rábida Friary, where the Spanish crown sent him 20,000 maravedis to buy new clothes and instructions to return to the Spanish court for renewed discussions. Agreement with the Spanish crown Columbus waited at King Ferdinand's camp until Ferdinand and Isabella conquered Granada, the last Muslim stronghold on the Iberian Peninsula, in January 1492. A council led by Isabella's confessor, Hernando de Talavera, found Columbus's proposal to reach the Indies implausible. Columbus had left for France when Ferdinand intervened, first sending Talavera and Bishop Diego Deza to appeal to the queen. Isabella was finally convinced by the king's clerk Luis de Santángel, who argued that Columbus would take his ideas elsewhere, and offered to help arrange the funding. Isabella then sent a royal guard to fetch Columbus, who had traveled 2 leagues (over 10 kilometers) toward Córdoba. In the April 1492 "Capitulations of Santa Fe", King Ferdinand and Queen Isabella promised Columbus that if he succeeded he would be given the rank of Admiral of the Ocean Sea and appointed Viceroy and Governor of all the new lands he might claim for Spain. He had the right to nominate three persons, from whom the sovereigns would choose one, for any office in the new lands. He would be entitled to 10% (diezmo) of all the revenues from the new lands in perpetuity. He also would have the option of buying one-eighth interest in any commercial venture in the new lands, and receive one-eighth (ochavo) of the profits. In 1500, during his third voyage to the Americas, Columbus was arrested and dismissed from his posts. He and his sons, Diego and Fernando, then conducted a lengthy series of court cases against the Castilian crown, known as the pleitos colombinos, alleging that the Crown had illegally reneged on its contractual obligations to Columbus and his heirs. The Columbus family had some success in their first litigation, as a judgment of 1511 confirmed Diego's position as viceroy but reduced his powers. Diego resumed litigation in 1512, which lasted until 1536, and further disputes initiated by heirs continued until 1790. Voyages Between 1492 and 1504, Columbus completed four round-trip voyages between Spain and the Americas, each voyage being sponsored by the Crown of Castile. On his first voyage he reached the Americas, initiating the European exploration and colonization of the Americas, as well as the Columbian exchange. His role in history is thus important to the Age of Discovery, Western history, and human history writ large. In Columbus's letter on the first voyage, published following his first return to Spain, he claimed that he had reached Asia, as previously described by Marco Polo and other Europeans. Over his subsequent voyages, Columbus refused to acknowledge that the lands he visited and claimed for Spain were not part of Asia, in the face of mounting evidence to the contrary. This might explain, in part, why the American continent was named after the Florentine explorer Amerigo Vespucci—who received credit for recognizing it as a "New World"—and not after Columbus. First voyage (1492–1493) On the evening of 3 August 1492, Columbus departed from Palos de la Frontera with three ships. 
The largest was a carrack, the Santa María, owned and captained by Juan de la Cosa, and under Columbus's direct command. The other two were smaller caravels, the Pinta and the Niña, piloted by the Pinzón brothers. Columbus first sailed to the Canary Islands. There he restocked provisions and made repairs then departed from San Sebastián de La Gomera on 6 September, for what turned out to be a five-week voyage across the ocean. On 7 October, the crew spotted "[i]mmense flocks of birds". On 11 October, Columbus changed the fleet's course to due west, and sailed through the night, believing land was soon to be found. At around 02:00 the following morning, a lookout on the Pinta, Rodrigo de Triana, spotted land. The captain of the Pinta, Martín Alonso Pinzón, verified the sight of land and alerted Columbus. Columbus later maintained that he had already seen a light on the land a few hours earlier, thereby claiming for himself the lifetime pension promised by Ferdinand and Isabella to the first person to sight land. Columbus called this island (in what is now the Bahamas) San Salvador (meaning "Holy Savior"); the natives called it Guanahani. Christopher Columbus's journal entry of 12 October 1492 states:I saw some who had marks of wounds on their bodies and I made signs to them asking what they were; and they showed me how people from other islands nearby came there and tried to take them, and how they defended themselves; and I believed and believe that they come here from tierra firme to take them captive. They should be good and intelligent servants, for I see that they say very quickly everything that is said to them; and I believe they would become Christians very easily, for it seemed to me that they had no religion. Our Lord pleasing, at the time of my departure I will take six of them from here to Your Highnesses in order that they may learn to speak.Columbus called the inhabitants of the lands that he visited Los Indios (Spanish for "Indians"). He initially encountered the Lucayan, Taíno, and Arawak peoples. Noting their gold ear ornaments, Columbus took some of the Arawaks prisoner and insisted that they guide him to the source of the gold. Columbus observed that their primitive weapons and military tactics made the natives susceptible to easy conquest, writing, "the people here are simple in war-like matters ... I could conquer the whole of them with fifty men, and govern them as I pleased." Columbus also explored the northeast coast of Cuba, where he landed on 28 October. On the night of 26 November, Martín Alonso Pinzón took the Pinta on an unauthorized expedition in search of an island called "Babeque" or "Baneque", which the natives had told him was rich in gold. Columbus, for his part, continued to the northern coast of Hispaniola, where he landed on 6 December. There, the Santa María ran aground on 25 December 1492 and had to be abandoned. The wreck was used as a target for cannon fire to impress the native peoples. Columbus was received by the native cacique Guacanagari, who gave him permission to leave some of his men behind. Columbus left 39 men, including the interpreter Luis de Torres, and founded the settlement of La Navidad, in present-day Haiti. Columbus took more natives prisoner and continued his exploration. He kept sailing along the northern coast of Hispaniola with a single ship until he encountered Pinzón and the Pinta on 6 January. On 13 January 1493, Columbus made his last stop of this voyage in the Americas, in the Bay of Rincón in northeast Hispaniola. 
There he encountered the Ciguayos, the only natives who offered violent resistance during this voyage. The Ciguayos refused to trade the number of bows and arrows that Columbus desired; in the ensuing clash one Ciguayo was stabbed in the buttocks and another wounded with an arrow in his chest. Because of these events, Columbus called the inlet the Golfo de Las Flechas (Bay of Arrows). Columbus headed for Spain on the Niña, but a storm separated him from the Pinta, and forced the Niña to stop at the island of Santa Maria in the Azores. Half of his crew went ashore to say prayers of thanksgiving in a chapel for having survived the storm. But while praying, they were imprisoned by the governor of the island, ostensibly on suspicion of being pirates. After a two-day standoff, the prisoners were released, and Columbus again set sail for Spain. Another storm forced Columbus into the port at Lisbon. From there he went to Vale do Paraíso north of Lisbon to meet King John II of Portugal, who told Columbus that he believed the voyage to be in violation of the 1479 Treaty of Alcáçovas. After spending more than a week in Portugal, Columbus set sail for Spain. Returning to Palos on 15 March 1493, he was given a hero's welcome and soon afterward received by Isabella and Ferdinand in Barcelona. Columbus's letter on the first voyage, dispatched to the Spanish court, was instrumental in spreading the news throughout Europe about his voyage. Almost immediately after his arrival in Spain, printed versions began to appear, and word of his voyage spread rapidly. Most people initially believed that he had reached Asia. The Bulls of Donation, three papal bulls of Pope Alexander VI delivered in 1493, purported to grant overseas territories to Portugal and the Catholic Monarchs of Spain. They were replaced by the Treaty of Tordesillas of 1494. Second voyage (1493–1496) On 24 September 1493, Columbus sailed from Cádiz with 17 ships and supplies to establish permanent colonies in the Americas. He sailed with nearly 1,500 men, including sailors, soldiers, priests, carpenters, stonemasons, metalworkers, and farmers. Among the expedition members were Alvarez Chanca, a physician who wrote a detailed account of the second voyage; Juan Ponce de León, the first governor of Puerto Rico and Florida; the father of Bartolomé de las Casas; Juan de la Cosa, a cartographer who is credited with making the first world map depicting the New World; and Columbus's youngest brother Diego. The fleet stopped at the Canary Islands to take on more supplies, and set sail again on 7 October, deliberately taking a more southerly course than on the first voyage. On 3 November, they arrived in the Windward Islands; the first island they encountered was named Dominica by Columbus, but not finding a good harbor there, they anchored off a nearby smaller island, which he named Mariagalante, now a part of Guadeloupe and called Marie-Galante. Other islands named by Columbus on this voyage included Montserrat, Antigua, Saint Martin, and the Virgin Islands, among many others. On 22 November, Columbus returned to Hispaniola to visit La Navidad, where 39 Spaniards had been left during the first voyage. Columbus found the fort in ruins, destroyed by the Taínos after some of the Spaniards had antagonized their hosts with their unrestrained lust for gold and women. Columbus then established a poorly located and short-lived settlement to the east, La Isabela, in the present-day Dominican Republic.
From April to August 1494, Columbus explored Cuba and Jamaica, then returned to Hispaniola. By the end of 1494, disease and famine had killed two-thirds of the Spanish settlers. Columbus implemented encomienda, a Spanish labor system that rewarded conquerors with the labor of conquered non-Christian people. Columbus executed Spanish colonists for minor crimes, and used dismemberment as punishment. Columbus and the colonists enslaved the indigenous people, including children. Natives were beaten, raped, and tortured for the location of imagined gold. Thousands committed suicide rather than face the oppression. In February 1495, Columbus rounded up about 1,500 Arawaks, some of whom had rebelled, in a great slave raid. About 500 of the strongest were shipped to Spain as slaves, with about two hundred of those dying en route. In June 1495, the Spanish crown sent ships and supplies to Hispaniola. In October, Florentine merchant Gianotto Berardi, who had won the contract to provision the fleet of Columbus's second voyage and to supply the colony on Hispaniola, received almost 40,000 maravedís worth of enslaved Indians. He renewed his effort to get supplies to Columbus, and was working to organize a fleet when he suddenly died in December. On 10 March 1496, having been away about 30 months, the fleet departed La Isabela. On 8 June the crew sighted land somewhere between Lisbon and Cape St. Vincent, and disembarked in Cádiz on 11 June. Third voyage (1498–1500) On 30 May 1498, Columbus left with six ships from Sanlúcar, Spain. The fleet called at Madeira and the Canary Islands, where it divided in two, with three ships heading for Hispaniola and the other three vessels, commanded by Columbus, sailing south to the Cape Verde Islands and then westward across the Atlantic. It is probable that this expedition was intended at least partly to confirm rumors of a large continent south of the Caribbean Sea, that is, South America. On 31 July they sighted Trinidad, the most southerly of the Caribbean islands. On 5 August, Columbus sent several small boats ashore on the southern side of the Paria Peninsula in what is now Venezuela, near the mouth of the Orinoco river. This was the first recorded landing of Europeans on the mainland of South America, which Columbus realized must be a continent. The fleet then sailed to the islands of Chacachacare and Margarita, reaching the latter on 14 August, and sighted Tobago and Grenada from afar, according to some scholars. On 19 August, Columbus returned to Hispaniola. There he found settlers in rebellion against his rule, and his unfulfilled promises of riches. Columbus had some of the Europeans tried for their disobedience; at least one rebel leader was hanged. In October 1499, Columbus sent two ships to Spain, asking the Court of Spain to appoint a royal commissioner to help him govern. By this time, accusations of tyranny and incompetence on the part of Columbus had also reached the Court. The sovereigns sent Francisco de Bobadilla, a relative of Marquesa Beatriz de Bobadilla, a patron of Columbus and a close friend of Queen Isabella, to investigate the accusations of brutality made against the Admiral. Arriving in Santo Domingo while Columbus was away, Bobadilla was immediately met with complaints about all three Columbus brothers. He moved into Columbus's house and seized his property, took depositions from the Admiral's enemies, and declared himself governor. 
Bobadilla reported to Spain that Columbus once punished a man found guilty of stealing corn by having his ears and nose cut off and then selling him into slavery. He claimed that Columbus regularly used torture and mutilation to govern Hispaniola. Testimony recorded in the report stated that Columbus congratulated his brother Bartolomeo on "defending the family" when the latter ordered a woman paraded naked through the streets and then had her tongue cut because she had "spoken ill of the admiral and his brothers". The document also describes how Columbus put down native unrest and revolt: he first ordered a brutal suppression of the uprising in which many natives were killed, and then paraded their dismembered bodies through the streets in an attempt to discourage further rebellion. Columbus vehemently denied the charges. The neutrality and accuracy of the accusations and investigations of Bobadilla toward Columbus and his brothers have been disputed by historians, given the anti-Italian sentiment of the Spaniards and Bobadilla's desire to take over Columbus' position. In early October 1500, Columbus and Diego presented themselves to Bobadilla, and were put in chains aboard La Gorda, the caravel on which Bobadilla had arrived at Santo Domingo. They were returned to Spain, where they languished in jail for a time. The record of these accusations resurfaced only with the discovery by archivist Isabel Aguirre of an incomplete copy of the testimonies against them gathered by Francisco de Bobadilla at Santo Domingo in 1500. She found a manuscript copy of this pesquisa (inquiry) in the Archive of Simancas, Spain, uncatalogued until she and Consuelo Varela published their book, La caída de Cristóbal Colón: el juicio de Bobadilla (The fall of Christopher Colón: the judgement of Bobadilla) in 2006. Fourth voyage (1502–1504) On 9 May 1502, Columbus, with his brother Bartolomeo as second in command and his son Fernando, left Cádiz with his flagship Santa María and three other vessels, crewed by 140 men (some scholars, including Sauer, say the fleet sailed 11 May; Cook says 9 May). He sailed to Arzila on the Moroccan coast to rescue Portuguese soldiers said to be besieged by the Moors. The siege had been lifted by the time they arrived, so the Spaniards stayed only a day and continued on to the Canary Islands. On 15 June, the fleet arrived at Martinique, where it lingered for several days. A hurricane was forming, so Columbus continued westward, hoping to find shelter on Hispaniola. He arrived at Santo Domingo on 29 June, but was denied port, and the new governor, Nicolás de Ovando, refused to listen to his warning that a hurricane was approaching. Instead, while Columbus's ships sheltered at the mouth of the Rio Jaina, the first Spanish treasure fleet sailed into the hurricane. Columbus's ships survived with only minor damage, while 20 of the 30 ships in the governor's fleet were lost along with 500 lives (including that of Francisco de Bobadilla). Although a few surviving ships managed to straggle back to Santo Domingo, Aguja, the fragile ship carrying Columbus's personal belongings and his 4,000 pesos in gold, was the sole vessel to reach Spain. The gold was his tenth (décimo) of the profits from Hispaniola, equal to 240,000 maravedis, guaranteed by the Catholic Monarchs in 1492. After a brief stop at Jamaica, Columbus sailed to Central America, arriving at the coast of Honduras on 30 July. Here Bartolomeo found native merchants and a large canoe. On 14 August, Columbus landed on the continental mainland at Punta Caxinas, now Puerto Castilla, Honduras.
He spent two months exploring the coasts of Honduras, Nicaragua, and Costa Rica, seeking a strait in the western Caribbean through which he could sail to the Indian Ocean. Sailing south along the Nicaraguan coast, he found a channel that led into Almirante Bay in Panama on 5 October. As soon as his ships anchored in Almirante Bay, Columbus encountered Ngäbe people in canoes who were wearing gold ornaments. In January 1503, he established a garrison at the mouth of the Belén River. Columbus left for Hispaniola on 16 April. On 10 May he sighted the Cayman Islands, naming them "Las Tortugas" after the numerous sea turtles there. His ships sustained damage in a storm off the coast of Cuba. Unable to travel farther, on 25 June 1503 they were beached in Saint Ann Parish, Jamaica. For six months Columbus and 230 of his men remained stranded on Jamaica. Diego Méndez de Segura, who had shipped out as a personal secretary to Columbus, and a Spanish shipmate called Bartolomé Flisco, along with six natives, paddled a canoe to get help from Hispaniola. The governor, Nicolás de Ovando y Cáceres, detested Columbus and obstructed all efforts to rescue him and his men. In the meantime Columbus, in a desperate effort to induce the natives to continue provisioning him and his hungry men, won their favor by predicting a lunar eclipse for 29 February 1504, using Abraham Zacuto's astronomical charts. Help finally arrived, no thanks to the governor, on 28 June 1504, and Columbus and his men arrived in Sanlúcar, Spain, on 7 November. Later life, illness, and death Columbus had always claimed that the conversion of non-believers was one reason for his explorations, but he grew increasingly religious in his later years. Probably with the assistance of his son Diego and his friend the Carthusian monk Gaspar Gorricio, Columbus produced two books during his later years: a Book of Privileges (1502), detailing and documenting the rewards from the Spanish Crown to which he believed he and his heirs were entitled, and a Book of Prophecies (1505), in which passages from the Bible were used to place his achievements as an explorer in the context of Christian eschatology. In his later years, Columbus demanded that the Spanish Crown give him his tenth of all the riches and trade goods yielded by the new lands, as stipulated in the Capitulations of Santa Fe. Because he had been relieved of his duties as governor, the crown did not feel bound by that contract and his demands were rejected. After his death, his heirs sued the Crown for a part of the profits from trade with America, as well as other rewards. This led to a protracted series of legal disputes known as the pleitos colombinos ("Columbian lawsuits"). During a violent storm on his first return voyage, Columbus, then 41, had suffered an attack of what was believed at the time to be gout. In subsequent years, he was plagued with what was thought to be influenza and other fevers, bleeding from the eyes, temporary blindness and prolonged attacks of gout. The attacks increased in duration and severity, sometimes leaving Columbus bedridden for months at a time, and culminated in his death 14 years later. Based on Columbus's lifestyle and the described symptoms, some modern commentators suspect that he suffered from reactive arthritis, rather than gout. Reactive arthritis is a joint inflammation caused by intestinal bacterial infections or after acquiring certain sexually transmitted diseases (primarily chlamydia or gonorrhea). In 2006, Frank C. 
Arnett, a medical doctor, and historian Charles Merrill, published their paper in The American Journal of the Medical Sciences proposing that Columbus had a form of reactive arthritis; Merrill made the case in that same paper that Columbus was the son of Catalans and his mother possibly a member of a prominent converso (converted Jew) family. "It seems likely that [Columbus] acquired reactive arthritis from food poisoning on one of his ocean voyages because of poor sanitation and improper food preparation," says Arnett, a rheumatologist and professor of internal medicine, pathology and laboratory medicine at the University of Texas Medical School at Houston. Some historians such as H. Micheal Tarver and Emily Slape, as well as medical doctors such as Arnett and Antonio Rodríguez Cuartero, believe that Columbus had such a form of reactive arthritis, but according to other authorities, this is "speculative", or "very speculative". After his arrival to Sanlúcar from his fourth voyage (and Queen Isabella's death), an ill Columbus settled in Seville in April 1505. He stubbornly continued to make pleas to the crown to defend his own personal privileges and his family's. He moved to Segovia (where the court was at the time) on a mule by early 1506, and, on the occasion of the wedding of King Ferdinand with Germaine of Foix in Valladolid in March 1506, Columbus moved to the aforementioned city to persist with his demands. On 20 May 1506, aged 54, Columbus died in Valladolid. Location of remains Columbus's remains were first buried at a convent in Valladolid, then moved to the monastery of La Cartuja in Seville (southern Spain) by the will of his son Diego. They may have been exhumed in 1513 and interred at the Cathedral of Seville. In about 1536, the remains of both Columbus and his son Diego were moved to a cathedral in Colonial Santo Domingo, in the present-day Dominican Republic. By some accounts, in 1793, when France took over the entire island of Hispaniola, Columbus's remains were moved to Havana, Cuba. After Cuba became independent following the Spanish–American War in 1898, the remains were moved back to the Cathedral of Seville, Spain, where they were placed on an elaborate catafalque. In June 2003, DNA samples were taken from these remains as well as those of Columbus's brother Diego and younger son Fernando. Initial observations suggested that the bones did not appear to match Columbus's physique or age at death. DNA extraction proved difficult; only short fragments of mitochondrial DNA could be isolated. These matched corresponding DNA from Columbus's brother, supporting that both individuals had shared the same mother. Such evidence, together with anthropologic and historic analyses, led the researchers to conclude that the remains belonged to Christopher Columbus. In 1877, a priest discovered a lead box at Santo Domingo inscribed: "Discoverer of America, First Admiral". Inscriptions found the next year read "Last of the remains of the first admiral, Sire Christopher Columbus, discoverer." The box contained bones of an arm and a leg, as well as a bullet. These remains were considered legitimate by physician and U.S. Assistant Secretary of State John Eugene Osborne, who suggested in 1913 that they travel through the Panama Canal as a part of its opening ceremony. These remains were kept at the Basilica Cathedral of Santa María la Menor before being moved to the Columbus Lighthouse (inaugurated in 1992). 
The authorities in Santo Domingo have never allowed these remains to be exhumed, so it is unconfirmed whether they are from Columbus's body as well. Commemoration The figure of Columbus was not ignored in the British colonies during the colonial era: Columbus became a unifying symbol early in the history of the colonies that became the United States when Puritan preachers began to use his life story as a model for a "developing American spirit". In the spring of 1692, Puritan preacher Cotton Mather described Columbus's voyage as one of three shaping events of the modern age, connecting Columbus's voyage and the Puritans' migration to North America, seeing them together as the key to a grand design. The use of Columbus as a founding figure of New World nations spread rapidly after the American Revolution. This was out of a desire to develop a national history and founding myth with fewer ties to Britain. His name was the basis for the female national personification of the United States, Columbia, in use since the 1730s with reference to the original Thirteen Colonies, and also a historical name applied to the Americas and to the New World. The federal capital (District of Columbia) was named for her, as well as Columbia, South Carolina, and Columbia Rediviva, the ship for which the Columbia River was named. Columbus's name was given to the newly born Republic of Colombia in the early 19th century, inspired by the political project of "Colombeia" developed by revolutionary Francisco de Miranda, which was put at the service of the emancipation of continental Hispanic America. To commemorate the 400th anniversary of the landing of Columbus, the 1893 World's Fair in Chicago was named the World's Columbian Exposition. The U.S. Postal Service issued the first U.S. commemorative stamps, the Columbian Issue, depicting Columbus, Queen Isabella and others in various stages of his several voyages. The policies related to the celebration of the Spanish colonial empire as the vehicle of a nationalist project undertaken in Spain during the Restoration in the late 19th century took form with the commemoration of the 4th centenary on 12 October 1892 (in which the figure of Columbus was extolled by the Conservative government), eventually becoming the very same national day. Several monuments commemorating the "discovery" were erected in cities such as Palos, Barcelona, Granada, Madrid, Salamanca, Valladolid and Seville in the years around the 400th anniversary. For the Columbus Quincentenary in 1992, a second Columbian issue was released jointly with Italy, Portugal, and Spain. Columbus was celebrated at Seville Expo '92, and Genoa Expo '92. The Boal Mansion Museum, founded in 1951, contains a collection of materials concerning later descendants of Columbus and collateral branches of the family. It features a 16th-century chapel from a Spanish castle reputedly owned by Diego Colón which became the residence of Columbus's descendants. The chapel interior was dismantled and moved from Spain in 1909 and re-erected on the Boal estate at Boalsburg, Pennsylvania. Inside it are numerous religious paintings and other objects including a reliquary with fragments of wood supposedly from the True Cross. The museum also holds a collection of documents mostly relating to Columbus descendants of the late 18th and early 19th centuries. In many countries of the Americas, as well as Spain and Italy, Columbus Day celebrates the anniversary of Columbus's arrival in the Americas on 12 October 1492. 
Legacy The voyages of Columbus are considered a turning point in human history, marking the beginning of globalization and accompanying demographic, commercial, economic, social, and political changes. His explorations resulted in permanent contact between the two hemispheres, and the term "pre-Columbian" is used to refer to the cultures of the Americas before the arrival of Columbus and his European successors. The ensuing Columbian exchange saw the massive exchange of animals, plants, fungi, diseases, technologies, mineral wealth and ideas. In the first century after his endeavors, Columbus's figure largely languished in the backwaters of history, and his reputation was beset by his failures as a colonial administrator. His legacy was somewhat rescued from oblivion when he began to appear as a character in Italian and Spanish plays and poems from the late 16th century onward. Columbus was subsumed into the Western narrative of colonization and empire building, which invoked notions of translatio imperii and translatio studii to underline who was considered "civilized" and who was not. The Americanization of the figure of Columbus began in the latter decades of the 18th century, after the revolutionary period of the United States, elevating the status of his reputation to a national myth, homo americanus. His landing became a powerful icon as an "image of American genesis". The American Columbus myth was reconfigured later in the century when he was enlisted as an ethnic hero by immigrants to the United States who were not of Anglo-Saxon stock, such as Jewish, Italian, and Irish people, who claimed Columbus as a sort of ethnic founding father. Catholics unsuccessfully tried to promote him for canonization in the 19th century. From the 1990s onward, new depictions of Columbus as the environmental hatchet man or the scapegoat for genocide compete with the hitherto prevalent discourses of Columbus as the Christ-bearer, the scientist, or the father of America. More recently, however, the narrative has featured the negative effects of the conquest on native populations. Exposed to Old World diseases, the indigenous populations of the New World collapsed, and were largely replaced by Europeans and Africans, who brought with them new methods of farming, business, governance, and religious worship. Originality of discovery of America Though Christopher Columbus came to be considered the European discoverer of America in Western popular culture, his historical legacy is more nuanced. After settling Iceland, the Norse settled the uninhabited southern part of Greenland beginning in the 10th century. Norsemen are believed to have then set sail from Greenland and Iceland to become the first known Europeans to reach the North American mainland, nearly 500 years before Columbus reached the Caribbean. The 1960s discovery of a Norse settlement dating to c. 1000 AD at L'Anse aux Meadows, Newfoundland, partially corroborates accounts within the Icelandic sagas of Erik the Red's colonization of Greenland and his son Leif Erikson's subsequent exploration of a place he called Vinland. In the late 19th century, following work by Carl Christian Rafn and B. F. De Costa, Rasmus Bjørn Anderson sought to deconstruct the Columbus discovery story. In his book America Not Discovered by Columbus (1874), he attempted to show that Columbus must have known of the North American continent before he started his voyage of discovery. 
Most modern scholars doubt Columbus had knowledge of the Norse settlements in America, with his arrival on the continent most likely being an independent discovery. Europeans devised explanations for the origins of the Native Americans and their geographical distribution with narratives that often served to reinforce their own preconceptions built on ancient intellectual foundations. In modern Latin America, the non-Native populations of some countries often demonstrate an ambiguous attitude toward the perspectives of indigenous peoples regarding the so-called "discovery" by Columbus and the era of colonialism that followed. In his 1960 monograph, Mexican philosopher and historian Edmundo O'Gorman explicitly rejects the Columbus discovery myth, arguing that the idea that Columbus discovered America was a misleading legend fixed in the public mind through the works of American author Washington Irving during the 19th century. O'Gorman argues that to assert Columbus "discovered America" is to shape the facts concerning the events of 1492 to make them conform to an interpretation that arose many years later. For him, the Eurocentric view of the discovery of America sustains systems of domination in ways that favor Europeans. In a 1992 article for The UNESCO Courier, Félix Fernández-Shaw argues that the word "discovery" prioritizes European explorers as the "heroes" of the contact between the Old and New World. He suggests that the word "encounter" is more appropriate, being a more universal term which includes Native Americans in the narrative. America as a distinct land Historians have traditionally argued that Columbus remained convinced until his death that his journeys had been along the east coast of Asia as he originally intended (excluding arguments such as Anderson's). On his third voyage he briefly referred to South America as a "hitherto unknown" continent, while also rationalizing that it was the "Earthly Paradise".
include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions. Workers whose work involves chemistry, but not at a complexity requiring an education with a chemistry degree, are commonly referred to as chemical technicians. Such technicians, who typically hold an associate degree, commonly do simpler, routine work such as analyses for quality control or in clinical laboratories. A chemical technologist has more education or experience than a chemical technician but less than a chemist, often having a bachelor's degree in a different field of science together with an associate degree in chemistry (or many credits related to chemistry), or having the same education as a chemical technician but more experience. There are also degrees specifically for becoming a chemical technologist, which are somewhat distinct from those required when a student is interested in becoming a professional chemist. A chemical technologist is more involved than a chemical technician in the management and operation of the equipment and instrumentation necessary to perform chemical analyses. Technologists are part of the team of a chemical laboratory in which the quality of raw materials, intermediate products, and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant. In addition to all the training usually given to chemical technologists in their respective degree (or one given via an associate degree), a chemist is also trained to understand chemical phenomena in more detail, so that the chemist is capable of planning the steps needed to achieve a given goal in a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience, and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, has so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, how routine the task is, the current needs of the particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, and economic factors such as recession or economic depression; this makes it difficult to categorize the exact roles of these chemistry-related workers as standard for a given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks than those of a technician, such as tasks that also involve formal applied research, management, or supervision within the responsibilities of that same job title. The level of supervision given to a chemist also varies in a similar manner, influenced by similar factors.
It is important that those interested in a chemistry degree understand the variety of roles typically available to them, which vary depending on education and job experience. Chemists who hold a bachelor's degree are most commonly involved either in research assistance (working under the guidance of senior chemists in a research-oriented activity) or in distinct chemistry-related aspects of a business, organization, or enterprise, including quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visits to troubleshoot chemistry-related instruments, regulatory affairs, "on-demand" technical services, and chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other roles may include sales and marketing of chemical products and chemistry-related instruments, or technical writing. The more experience obtained, the more independent, leadership-oriented, or managerial the roles these chemists may take on in those organizations. Some chemists with more experience might change jobs or positions to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur, or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.). In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a closely related discipline may find chemist roles that allow them to enjoy more independence, leadership, and responsibility earlier in their careers, with fewer years of experience than those whose highest degree is a bachelor's. Sometimes, M.S. chemists receive more complex tasks and duties than chemists whose highest academic degree is a bachelor's and who have the same or nearly the same years of job experience. There are positions that are open only to those who have at least a master's-level degree related to chemistry. Although good chemists without a Ph.D. but with many years of experience may be allowed some applied research positions, the general rule is that Ph.D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions in big enterprises involved in chemistry-related work. Some positions, especially research-oriented ones, are open only to Ph.D. holders. Jobs that involve intensive research, that actively seek to lead the discovery of completely new chemical compounds under specifically assigned funds and resources, or that seek to develop new scientific theories more often than not require a Ph.D. Chemists with a Ph.D. as their highest academic degree are typically found in the research-and-development department of an enterprise and can also hold university positions as professors. Professors at research universities or at big universities usually have a Ph.D., and some research-oriented institutions might require postdoctoral training.
Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with a M.S. as professors too (and rarely, some big | the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the one seeking employment, economic factors such as recession or economic depression, among other factors, so this makes it difficult to categorize the exact roles of these chemistry-related workers as standard for that given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks than those of a technician, such as tasks that also involve formal applied research, management, or supervision included within the responsibilities of that same job title. The level of supervision given to that chemist also varies in a similar manner, with factors similar to those that affect the tasks demanded for a particular chemist. It is important that those interested in a Chemistry degree understand the variety of roles available to them (on average), which vary depending on education and job experience. Those Chemists who hold a bachelor's degree are most commonly involved in positions related to either research assistance (working under the guidance of senior chemists in a research-oriented activity), or, alternatively, they may work on distinct (chemistry-related) aspects of a business, organization or enterprise including aspects that involve quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, visitation for troubleshooting of chemistry-related instruments, regulatory affairs, "on-demand" technical services, chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other jobs or roles may include sales and marketing of chemical products and chemistry-related instruments or technical writing. The more experience obtained, the more independence and leadership or management roles these chemists may perform in those organizations. Some chemists with relatively higher experience might change jobs or job position to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.). In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or in a very related discipline may find chemist roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers with less years of experience than those with a bachelor's degree as highest degree. Sometimes, M.S. chemists receive more complex tasks duties in comparison with the roles and positions found by chemists with a bachelor's degree as their highest academic degree and with the same or close-to-same years of job experience. There are positions that are open only to those that at least have a degree related to chemistry at the master's level. 
Although good chemists without a Ph. D. degree but with relatively many years of experience may be allowed some applied research positions, the general rule is that Ph. D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions on big enterprises involved in chemistry-related duties. Some positions, especially research oriented, will only allow those chemists who are Ph. D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned monetary funds and resources or jobs that seek to develop new scientific theories require a Ph. D. more often than not. Chemists with a Ph. D. as the highest academic degree are found typically on the research-and-development department of an enterprise and can also hold university positions as professors. Professors for research universities or for big universities usually have a Ph. D., and some research-oriented institutions might require post-doctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with a M.S. as professors too (and rarely, some big universities who need part-time or temporary instructors, or temporary staff), but when the positions are scarce and the applicants are many, they might prefer Ph. D. holders instead. Employment The three major employers of chemists are academic institutions, industry, especially the chemical industry and the pharmaceutical industry, and government laboratories. Chemistry typically is divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, for example, in medicinal chemistry. Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. The Inorganic chemistry is also the study of atomic and molecular structure and bonding. Medicinal chemistry is the science involved with designing, synthesizing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships. Organic chemistry is the study of the structure, properties, composition, mechanisms, and chemical reaction of carbon compounds. Physical chemistry is the study of the physical fundamental basis of chemical systems and processes. 
In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, |
lived in South Gate, California. In 1988, the two brothers teamed up with New York City native Lawrence Muggerud (also known as DJ Muggs, previously in a rap group named 7A3) and Louis Freese (also known as B-Real) to form a hip-hop group named DVX (Devastating Vocal Excellence). The band soon lost Mellow Man Ace to a solo career, and changed their name to Cypress Hill, after a street in South Gate. Early works and mainstream success (1989–1996) After recording a demo in 1989, Cypress Hill signed a record deal with Ruffhouse Records. Their self-titled first album was released in August 1991. The lead single was the double A-side "The Phuncky Feel One"/"How I Could Just Kill a Man" which received heavy airplay on urban and college radio, most notably peaking at #1 on Billboard Hot Rap Tracks chart. The other two singles released from the album were "Hand on the Pump" and "Latin Lingo", the latter of which combined English and Spanish lyrics, a trait that was continued throughout their career. The success of these singles led to the album selling two million copies in the U.S. alone. In 1992, Cypress Hill's first contribution to a soundtrack was the song "Shoot 'Em Up" for the movie Juice. The group made their first appearance at Lollapalooza on the side stage in 1992. It was the festival's second year of touring, and featured a diverse lineup of acts such as Red Hot Chili Peppers, Ice Cube, Lush, Tool, Stone Temple Pilots, among others. Black Sunday, the group's second album, debuted at number one on the Billboard 200 in 1993, recording the highest Soundscan for a rap group up until that time. Also, with their debut still in the charts, they became the first rap group to have two albums in the top 10 of the Billboard 200 at the same time. With "Insane in the Brain" becoming a crossover hit, the album went triple platinum in the U.S. and sold about 3.26 million copies. "Insane in the Brain" also garnered the group their first Grammy nomination. Cypress Hill headlined the Soul Assassins tour with House of Pain and Funkdoobiest as support, then performed on a college tour with Rage Against the Machine and Seven Year Bitch. In 1993, Cypress Hill also had two tracks on the Judgment Night soundtrack, teaming up with Pearl Jam (without vocalist Eddie Vedder) on the track "Real Thing" and Sonic Youth on "I Love You Mary Jane". The soundtrack was notable for intentionally creating collaborations between the rap/hip-hop and rock/metal genres, and as a result the soundtrack peaked at #17 on the Billboard 200. The group later played at Woodstock 94, introducing new member Eric Bobo, son of Willie Bobo and formerly a percussionist with the Beastie Boys. That same year, Rolling Stone named the group as the Best Rap Group in their music awards voted by critics and readers. Cypress Hill then played at Lollapalooza for two successive years, topping the bill in 1995. They also appeared on the "Homerpalooza" episode of The Simpsons. The group received their second Grammy nomination in 1995 for "I Ain't Goin' Out Like That". Cypress Hill's third album III: Temples of Boom was released in 1995 as it peaked at #3 on the Billboard 200 and #3 on the Canadian Albums Chart. The album was certified platinum by the RIAA. "Throw Your Set in the Air" was the most successful single off the album, peaking at #45 on the Billboard Hot 100 and #11 on the Hot Rap Tracks charts. The single also earned Cypress Hill's third Grammy nomination. 
Afterwards, Sen Dog took a break from the group to form a Los Angeles-based rap rock band, SX-10. Meanwhile, in 1996, Cypress Hill appeared on the first Smokin' Grooves tour, featuring Ziggy Marley, The Fugees, Busta Rhymes, and A Tribe Called Quest. The group also released a nine track EP Unreleased and Revamped with rare mixes. Continued success, crossover appeal, and Stoned Raiders (1997–2002) In 1997, the members focused on their solo careers. DJ Muggs released Soul Assassins: Chapter 1, with features from Dr. Dre, KRS-One, Wyclef Jean, and Mobb Deep. B-Real appeared with Busta Rhymes, Coolio, LL Cool J, and Method Man on "Hit 'Em High" from the multi-platinum Space Jam Soundtrack. He also appeared with RBX, Nas, and KRS-One on "East Coast Killer, West Coast Killer" from Dr. Dre's Dr. Dre Presents the Aftermath album, and contributed to an album entitled The Psycho Realm with the group of the same name. Sen Dog also released the Get Wood sampler as part of SX-10 on the label Flip Records. In addition, Eric Bobo contributed drums to various rock bands on their albums, such as 311 and Soulfly. Cypress Hill released IV in 1998 which went gold in the US. The lead single off the album was "Dr. Greenthumb", as it peaked at #11 on the Hot Rap Tracks chart. It also peaked at #70 on the Billboard Hot 100, their last appearance on the chart to date. In 1999, Cypress Hill helped with the PC first-person shooter video game Kingpin: Life of Crime. Three of the band's songs from the 1998 IV album were in the game, "16 Men Till There's No Men Left", "Checkmate", and "Lightning Strikes". The group also did voice work for some of the game's characters. Also in 1999, the band released a greatest hits album in Spanish, Los Grandes Éxitos en Español. In 2000, Cypress Hill then fused genres with their fifth album, Skull & Bones, which consisted of two discs. The first disc Skull was composed of rap tracks while Bones explored further the group's forays into rock. The album peaked at #5 on the Billboard 200 and at #3 on the Canadian Albums Chart. The first two singles were "(Rock) Superstar" for rock radio and "(Rap) Superstar" for urban radio. Both singles received heavy airplay on both rock and urban radio, enabling Cypress Hill to crossover again. Following the release of Skull & Bones, Cypress Hill and MxPx landed a slot opening for The Offspring on the Conspiracy of One Tour. The group also released Live at the Fillmore, a concert disc recorded at San Francisco's The Fillmore in 2000. Cypress Hill continued their experimentation with rock on the Stoned Raiders album in 2001; however, its sales were a disappointment. The album peaked at #64 on the Billboard 200, the group's lowest position to that point. Also in 2001, the group made a cameo appearance as themselves in the film How High. Cypress Hill then recorded the track "Just Another Victim" for WWF as a theme song for Tazz, borrowing elements from the 2000 single "(Rock) Superstar". The song would later be featured on the compilation WWF Forceable Entry in March 2002. Till Death Do Us Part, DJ Muggs' hiatus, and Rise Up (2003–2012) Cypress Hill released Till Death Do Us Part in March 2004. It featured appearances by Bob Marley's son Damian Marley, Prodigy of Mobb Deep, and producers The Alchemist and Fredwreck. The album represented a further departure from the group's signature sound. Reggae was a strong influence on its sound, especially on the lead single "What's Your Number?". 
The track featured Tim Armstrong of Rancid on guitar and backup vocals. It was based on the classic song "The Guns of Brixton" from The Clash's album London Calling. "What's Your Number?" saw Cypress Hill crossover into the rock | that they would anticipate the outcome of the legislation before returning. Also in 2010, Cypress Hill performed at the Reading and Leeds Festivals on August 28 at Leeds and August 29 at Reading. On June 5, 2012, Cypress Hill and dubstep artist Rusko released a collaborative EP entitled Cypress X Rusko. DJ Muggs, who was still on a hiatus, and Eric Bobo were absent on the release. Also in 2012, Cypress Hill collaborated with Deadmau5 on his sixth studio album Album Title Goes Here, lending vocals on "Failbait". Elephants on Acid, Hollywood Walk of Fame, and Back in Black (2013–present) During the interval between Cypress Hill albums, the four members commenced work on various projects. B-Real formed the band Prophets of Rage alongside three members of Rage Against the Machine and two members of Public Enemy. He also released The Prescription EP under his Dr. Greenthumb persona. Sen Dog formed the band Powerflo alongside members of Fear Factory, downset., and Biohazard. DJ Muggs revived his Soul Assassins project as its main producer. Eric Bobo formed a duo named Ritmo Machine. He also contributed to an unreleased album by his father Willie Bobo. On September 28, 2018, Cypress Hill released the album Elephants on Acid, which saw the return of DJ Muggs as main composer and producer. It peaked at #120 on the Billboard 200. Overall, four different singles were released to promote the album. In April 2019 Cypress Hill received a star on the Hollywood Walk of Fame. They became the first Latin-American group to receive a star. In January 2022, the group announced the release date of their 10th studio album entitled Back in Black. The album is slated to be released on March 18, 2022. In addition, Cypress Hill will support the album by joining Slipknot alongside Ho99o9 for the second half of the 2022 Knotfest Roadshow. They had previously invited Slipknot to join their Great Smoke-Out festival back in 2009. Style Rapping One of the band's most striking aspects is B-Real's exaggeratedly high-pitched nasal vocals. In the book Check the Technique, B-Real described his nasal style, saying his rapping voice is "high and annoying...the nasal style I have was just something that I developed...my more natural style wasn't so pleasing to DJ Muggs and Sen Dog's ears" and talking about the nasal style in the book How to Rap, B-Real said "you want to stand out from the others and just be distinct...when you got something that can separate you from everybody else, you gotta use it to your advantage." In the film Art of Rap, B-Real credited the Beastie Boys as an influence when developing his rapping style. Sen Dog's voice is deeper, more violent, and often shouted alongside the rapping; his vocals are often emphasized by adding another background/choir voice to say them. Sen Dog's style is in contrast to B-Real's, who said "Sen's voice is so strong" and "it all blends together" when they are both on the same track. Both B-Real and Sen Dog started writing lyrics in both Spanish and English. Initially, B-Real was inspired to start writing raps from watching Sen Dog and Mellow Man Ace writing their lyrics, and originally B-Real was going to just be the writer for the group rather than a rapper. 
Their lyrics are noted for bringing a "cartoonish" approach to violence by Peter Shapiro and Allmusic. Production The sound and groove of their music, mostly produced by DJ Muggs, has spooky sounds and a stoned aesthetic; with its bass-heavy rhythms and odd sample loops ("Insane in the Brain" has a blues guitar pitched looped in its chorus), it carries a psychedelic value, which is lessened in their rock-oriented albums. For using rock/metal instrumentation the band is sometimes classified as a rap rock/metal rap group. The double album Skull & Bones consists of a pure rap disc (Skull) and a separate rock disc (Bones). In the live album Live at The Fillmore, some of the old classics were played in a rock/metal version, with Eric Bobo playing the drums and Sen Dog's band SX-10 as the other instrumentalists. 2010's Rise Up was the most radically different album in regards to production. DJ Muggs had produced |
firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process. The efficiency of an internal combustion engine can also be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today. Incomplete combustion produces carbon monoxide Carbon monoxide is one of the products of incomplete combustion; carbon is also released in the normal incomplete combustion reaction, forming soot and dust. Since carbon monoxide is a poisonous gas, complete combustion is preferable: carbon monoxide can cause respiratory trouble when inhaled because it takes the place of oxygen by combining with hemoglobin. Problems associated with incomplete combustion Environmental problems: Nitrogen and sulfur oxides combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. Because it makes certain nutrients that plants need, such as calcium and phosphorus, less available, it reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog. Human health problems: Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide is absorbed from the air in the lungs and then binds to hemoglobin in red blood cells, reducing their capacity to carry oxygen throughout the body. Smouldering Smouldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is typically an incomplete combustion reaction. Solid materials that can sustain a smouldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smouldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires. Rapid Rapid combustion is a form of combustion, otherwise known as a fire, in which large amounts of heat and light energy are released, often resulting in a flame. It is used in machinery such as internal combustion engines and in thermobaric weapons. Such combustion is frequently called an explosion, though for an internal combustion engine this is inaccurate: an internal combustion engine nominally operates on a controlled rapid burn, and only when the fuel-air mixture in an internal combustion engine actually explodes is the event known as detonation. Spontaneous Spontaneous combustion is a type of combustion which occurs by self-heating (an increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and, finally, ignition. 
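Tying this back to the combustion analyzers mentioned at the start of this passage: many analyzers report carbon monoxide on an "air-free" basis, rescaling the measured reading to 0% excess oxygen so that dilution by excess air does not hide a poorly burning appliance. The sketch below shows that common correction; the 20.9% reference value and the sample readings are illustrative, and the function name is ours.

```python
# Illustrative sketch: the "air-free" CO correction that many flue-gas
# combustion analyzers report, which rescales a measured CO reading to
# what it would be at 0% excess oxygen. Sample values are made up.

O2_REFERENCE = 20.9  # volume % of oxygen in dry ambient air


def co_air_free(co_measured_ppm: float, o2_measured_pct: float) -> float:
    """Rescale a CO reading (ppm) to a 0% O2 (undiluted) basis."""
    if o2_measured_pct >= O2_REFERENCE:
        raise ValueError("measured O2 must be below the ambient reference")
    return co_measured_ppm * O2_REFERENCE / (O2_REFERENCE - o2_measured_pct)


if __name__ == "__main__":
    # Hypothetical reading: 55 ppm CO at 7.0% flue-gas O2.
    print(f"{co_air_free(55.0, 7.0):.0f} ppm CO, air-free")  # roughly 83 ppm
```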
For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion. Turbulent Combustion resulting in a turbulent flame is the most used for industrial application (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes which happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: C_\mathit{x}H_\mathit{y}{} + \mathit{z}O2 -> \mathit{x}CO2{} + \frac{\mathit{y}}{2}H2O where . For example, the stoichiometric burning of propane in oxygen is: \underset{propane\atop (fuel)}{C3H8} + \underset{oxygen}{5O2} -> \underset{carbon\ dioxide}{3CO2} + \underset{water}{4H2O} Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Note that treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol: where . For example, the stoichiometric combustion of propane (C3H8) in air is: The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol. The stoichiometric combustion reaction for CHO in air: The stoichiometric combustion reaction for CHOS: The stoichiometric combustion reaction for CHONS: The stoichiometric combustion reaction for CHOF: Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to and, to a much lesser extent, to . forms by disproportionation of , and and form by disproportionation of . For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . 
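As a concrete check of the stoichiometric relations quoted above, the short sketch below computes the oxygen requirement z = x + y/4 for a hydrocarbon CxHy and the stoichiometric fuel fraction in air, using the "nitrogen"-to-oxygen ratio of about 3.77 given in the text. For propane it reproduces z = 5 and approximately the 4.02 vol % figure. The function names are ours, introduced only for illustration.

```python
# Minimal sketch of the stoichiometric relations for a hydrocarbon fuel
# CxHy burned in oxygen or in air. All non-oxygen air components are
# lumped as "nitrogen", following the text.

N2_TO_O2 = (100.0 - 20.95) / 20.95  # about 3.77


def stoich_o2(x: int, y: int) -> float:
    """Moles of O2 needed per mole of CxHy: z = x + y/4."""
    return x + y / 4.0


def fuel_fraction_in_air(x: int, y: int) -> float:
    """Stoichiometric fuel fraction (vol %) of CxHy in air."""
    z = stoich_o2(x, y)
    return 100.0 / (1.0 + z + z * N2_TO_O2)


if __name__ == "__main__":
    # Propane, C3H8: z = 5 and about 4.0 vol % fuel in air,
    # matching the 4.02 % figure quoted in the text.
    print(stoich_o2(3, 8))                            # 5.0
    print(f"{fuel_fraction_in_air(3, 8):.2f} vol %")  # approximately 4.02
```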
At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% . Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly , , , and . Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: \underset{fuel}{C_\mathit{x} H_\mathit{y}} + \underset{oxygen}{\mathit{z} O2} -> \underset{carbon \ dioxide}{\mathit{a}CO2} + \underset{carbon\ monoxide}{\mathit{b}CO} + \underset{water}{\mathit{c}H2O} + \underset{hydrogen}{\mathit{d}H2} When z falls below roughly 50% of the stoichiometric value, can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane () with four moles of , seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: Hydrogen: Oxygen: These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H2O -> CO2 + H2; For example, at the value of K is 0.728. Solving, the combustion gas consists of 42.4% , 29.0% , 14.7% , and 13.9% . Carbon becomes a stable phase at and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% and and about 0.5% . Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity. 
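The incomplete-combustion example above (one mole of propane burned with four moles of O2, products restricted to carbon dioxide, carbon monoxide, water and hydrogen, water-gas-shift constant K = 0.728) can be solved in a few lines. The sketch below carries out the element balances and the equilibrium condition; the closed-form quadratic and the variable names are ours, but the computed percentages agree closely with the four values quoted in the text.

```python
# Sketch of the worked example above: element balances plus the
# water-gas-shift equilibrium fix the composition of the 7 mol of
# combustion gas formed from C3H8 + 4 O2 (80% of stoichiometric O2).

import math

K = 0.728  # water-gas shift: CO + H2O <-> CO2 + H2

# For a CO2 + b CO + c H2O + d H2:
#   carbon:   a + b = 3
#   hydrogen: c + d = 4
#   oxygen:   2a + b + c = 8   ->   b = 3 - a,  c = 5 - a,  d = a - 1
# Shift equilibrium K = (a*d)/(b*c) then gives a quadratic in a:
#   (1 - K) a^2 + (8K - 1) a - 15K = 0
A = 1.0 - K
B = 8.0 * K - 1.0
C = -15.0 * K
a = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
b, c, d = 3.0 - a, 5.0 - a, a - 1.0
total = a + b + c + d  # 7 mol of combustion gas

for name, n in (("CO2", a), ("CO", b), ("H2O", c), ("H2", d)):
    print(f"{name}: {100.0 * n / total:.1f} %")
# Prints approximately 29.0 % CO2, 13.8 % CO, 42.4 % H2O and 14.7 % H2,
# in close agreement with the percentages quoted in the text.
```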
Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders. Combustion management Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss. In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually and ) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane () combustion, for example, slightly more than two molecules of oxygen are required. The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest. Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen. Reaction mechanism Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. 
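Returning to the combustion-management discussion above, the sketch below shows a common rule-of-thumb way to turn a dry flue-gas oxygen measurement into an excess-air estimate. The simple ratio used here is only an approximation (it ignores fuel composition and moisture), and the sample analyzer readings are invented.

```python
# Rough sketch connecting the two combustion-management principles above
# to numbers: estimate excess combustion air from a dry flue-gas O2
# reading. This is a field approximation, not an exact material balance.

O2_AMBIENT = 20.9  # volume % O2 in dry air


def excess_air_pct(o2_flue_pct: float) -> float:
    """Approximate % excess air from measured flue-gas O2 (dry basis)."""
    return 100.0 * o2_flue_pct / (O2_AMBIENT - o2_flue_pct)


if __name__ == "__main__":
    for o2 in (2.0, 4.0, 8.0):  # hypothetical analyzer readings
        print(f"O2 {o2:.1f} % -> ~{excess_air_pct(o2):.0f} % excess air")
    # Low O2 readings mean the burner is running close to stoichiometric,
    # which minimises the sensible-heat loss carried away by the offgas.
```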
Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is | flame is the most used for industrial application (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity. In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes which happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: C_\mathit{x}H_\mathit{y}{} + \mathit{z}O2 -> \mathit{x}CO2{} + \frac{\mathit{y}}{2}H2O where . For example, the stoichiometric burning of propane in oxygen is: \underset{propane\atop (fuel)}{C3H8} + \underset{oxygen}{5O2} -> \underset{carbon\ dioxide}{3CO2} + \underset{water}{4H2O} Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Note that treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol: where . For example, the stoichiometric combustion of propane (C3H8) in air is: The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol. The stoichiometric combustion reaction for CHO in air: The stoichiometric combustion reaction for CHOS: The stoichiometric combustion reaction for CHONS: The stoichiometric combustion reaction for CHOF: Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to and, to a much lesser extent, to . forms by disproportionation of , and and form by disproportionation of . For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% . 
Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly , , , and . Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: \underset{fuel}{C_\mathit{x} H_\mathit{y}} + \underset{oxygen}{\mathit{z} O2} -> \underset{carbon \ dioxide}{\mathit{a}CO2} + \underset{carbon\ monoxide}{\mathit{b}CO} + \underset{water}{\mathit{c}H2O} + \underset{hydrogen}{\mathit{d}H2} When z falls below roughly 50% of the stoichiometric value, can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane () with four moles of , seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: Hydrogen: Oxygen: These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H2O -> CO2 + H2; For example, at the value of K is 0.728. Solving, the combustion gas consists of 42.4% , 29.0% , 14.7% , and 13.9% . Carbon becomes a stable phase at and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% and and about 0.5% . Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity. 
Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders. Combustion management Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss. In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually and ) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane () combustion, for example, slightly more than two molecules of oxygen are required. The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest. Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen. Reaction mechanism Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. 
Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue. Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. There are a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas. Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke. The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s). Detailed descriptions of combustion processes, from the chemical kinetics perspective, requires the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involve hundreds of chemical species reacting according to thousands of reactions. Inclusion of such mechanisms within computational flow solvers still represents a pretty challenging task mainly in two aspects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a disparate number of time scales which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers. Therefore, a plethora of methodologies has been devised for reducing the complexity of combustion mechanisms without resorting to high detail level. Examples are provided by: The Relaxation Redistribution Method (RRM) The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments The invariant constrained equilibrium edge preimage curve method. A few variational approaches The Computational Singular perturbation (CSP) method and further developments. The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach. The G-Scheme. The Method of Invariant Grids (MIG). 
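To make the stiffness point above concrete, the toy sketch below integrates a single hypothetical Arrhenius reaction coupled to an energy equation with SciPy's implicit BDF solver. Even this minimal model produces a long induction period followed by a very fast ignition transient, the separation of time scales that makes detailed mechanisms expensive to integrate. All rate constants here are invented for illustration and do not correspond to any real fuel.

```python
# Toy illustration of combustion-chemistry stiffness: one made-up
# Arrhenius reaction plus an energy equation, integrated with an
# implicit ("stiff") BDF solver.

import numpy as np
from scipy.integrate import solve_ivp

A, EA, R = 1.0e10, 1.5e5, 8.314   # pre-exponential (1/s), activation energy (J/mol)
T0, DT_AD = 800.0, 1500.0         # initial temperature and adiabatic rise (K)


def rhs(t, state):
    y, temp = state                       # fuel mass fraction, temperature
    rate = A * np.exp(-EA / (R * temp)) * y
    return [-rate, DT_AD * rate]          # fuel consumption, heat release


sol = solve_ivp(rhs, (0.0, 0.05), [1.0, T0], method="BDF",
                rtol=1e-8, atol=1e-10)

# The temperature stays nearly flat, then runs away over a tiny interval.
ignition_index = np.argmax(sol.y[1] > T0 + 0.5 * DT_AD)
print(f"approximate ignition time: {sol.t[ignition_index] * 1e3:.2f} ms")
```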
Kinetic modelling The kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials by using for instance Thermogravimetric analysis. Temperature Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the |
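As a hedged illustration of the kinetic modelling of thermal decomposition mentioned just above, the sketch below integrates a single first-order Arrhenius decomposition under a constant heating rate, the simplest kind of model commonly fitted to thermogravimetric (TGA) curves. Every parameter (pre-exponential factor, activation energy, heating rate) is assumed for illustration rather than taken from any measurement, and real materials usually require multi-step models.

```python
# Sketch of a one-step, first-order decomposition model of the kind
# fitted to TGA mass-loss curves, heated at a constant rate.
# All kinetic parameters below are assumed, not measured.

import numpy as np

A, EA, R = 1.0e12, 1.8e5, 8.314   # assumed pre-exponential (1/s) and Ea (J/mol)
BETA = 10.0 / 60.0                # heating rate: 10 K/min expressed in K/s

temps = np.arange(300.0, 900.0, 0.1)   # temperature programme (K)
alpha = np.zeros_like(temps)           # extent of conversion, 0..1
for i in range(1, temps.size):
    dT = temps[i] - temps[i - 1]
    k = A * np.exp(-EA / (R * temps[i - 1]))
    # d(alpha)/dT = (k / beta) * (1 - alpha) for first-order kinetics
    alpha[i] = min(alpha[i - 1] + (k / BETA) * (1.0 - alpha[i - 1]) * dT, 1.0)

t_half = temps[np.argmax(alpha >= 0.5)]
print(f"predicted temperature of 50 % conversion: {t_half:.0f} K")
```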
the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic scripts in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet. The earliest datable Cyrillic inscriptions have been found in the area of Preslav. They have been found in the medieval city itself, and at nearby Patleina Monastery, both in present-day Shumen Province, in the Ravna Monastery and in the Varna Monastery. With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School (14th and 15th centuries) such as Gregory Tsamblak or Constantine of Kostenets the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. That is famous in Russia as the second South-Slavic influence. In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several letters were designed by Peter himself. Letters became distinguished between upper and lower case. West European typography culture was also adopted. The pre-reform forms of letters called 'Полуустав' were notably kept for use in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel. Letters Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below. Capital and lowercase letters were not distinguished in old manuscripts. Yeri () was originally a ligature of Yer and I ( + = ). Iotation was indicated by ligatures formed with the letter І: (not an ancestor of modern Ya, Я, which is derived from ), , (ligature of and ), , . Sometimes different letters were used interchangeably, for example = = , as were typographical variants like = . There were also commonly used ligatures like = . The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors. The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character. The Unicode 5.1 standard, released on 4 April 2008, greatly improves computer support for the early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8. Currency signs Some currency signs have derived from Cyrillic letters: The Ukrainian hryvnia sign (₴) isfrom the cursive minuscule Ukrainian Cyrillic letter He (г). The Russian ruble sign (₽) from the majuscule Р. 
The Kyrgyzstani som sign (⃀) from the majuscule С (es) The Kazakhstani tenge sign (₸) from Т The Mongolian tögrög sign (₮) from Т Letterforms and typography The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late Medieval Cyrillic letters (categorized as vyaz' and still found on many icon inscriptions today) show a marked tendency to be very tall and narrow, with strokes often shared between adjacent letters. Peter the Great, Tsar of Russia, mandated the use of westernized letter forms (ru) in the early 18th century. Over time, these were largely adopted in the other languages that use the script. Thus, unlike the majority of modern Greek fonts that retained their own set of design principles for lower-case letters (such as the placement of serifs, the shapes of stroke ends, and stroke-thickness rules, although Greek capital letters do use Latin design principles), modern Cyrillic fonts are much the same as modern Latin fonts of the same font family. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type. Lowercase forms Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography. Upright Cyrillic lowercase letters are essentially small capitals (with exceptions: Cyrillic , , , , , and adopted Western lowercase shapes, lowercase is typically designed under the influence of Latin , lowercase , and are traditional handwritten forms), although a good-quality Cyrillic typeface will still include separate small-caps glyphs. Cyrillic fonts, as well as Latin ones, have roman and italic types (practically all popular modern fonts include parallel sets of Latin and Cyrillic letters, where many glyphs, uppercase as well as lowercase, are simply shared by both). However, the native font terminology in most Slavic languages (for example, in Russian) does not use the words "roman" and "italic" in this sense. Instead, the nomenclature follows German naming patterns: Roman type is called ("upright type")—compare with ("regular type") in German Italic type is called ("cursive") or ("cursive type")—from the German word , meaning italic typefaces and not cursive writing Cursive handwriting is ("handwritten type")—in German: or , both meaning literally 'running type' A (mechanically) sloped oblique type of sans-serif faces is ("sloped" or "slanted type"). A boldfaced type is called ("semi-bold type"), because there existed fully boldfaced shapes that have been out of use since the beginning of the 20th century. Italic and cursive forms Similarly to Latin fonts, italic and cursive types of many Cyrillic letters (typically lowercase; uppercase only for handwritten or stylish types) are very different from their upright roman types. In certain cases, the correspondence between uppercase and lowercase glyphs does not coincide in Latin and Cyrillic fonts: for example, italic Cyrillic is the lowercase counterpart of not of . Note: in some fonts or styles, , i.e. the lowercase italic Cyrillic , may look like Latin , and , i.e. lowercase italic Cyrillic , may look like small-capital italic . In Standard Serbian, as well as in Macedonian, some italic and cursive letters are allowed to be different to more closely resemble the handwritten letters. The regular (upright) shapes are generally standardized in small caps form. 
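The currency signs listed above correspond to specific Unicode code points, which the small sketch below prints. The som sign (U+20C0) was only added in Unicode 14.0, so unicodedata is called with a fallback in case the running Python build ships an older Unicode database.

```python
# Quick look at the Unicode code points behind the Cyrillic-derived
# currency signs listed above.

import unicodedata

signs = {
    "hryvnia": "\u20b4",   # from the cursive minuscule Cyrillic letter He
    "ruble":   "\u20bd",   # from the majuscule Er
    "som":     "\u20c0",   # from the majuscule Es (Unicode 14.0)
    "tenge":   "\u20b8",   # from Te
    "tögrög":  "\u20ae",   # from Te
}

for currency, sign in signs.items():
    name = unicodedata.name(sign, "(name unavailable in this Unicode database)")
    print(f"{currency:8s} U+{ord(sign):04X} {name}")
```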
Notes: Depending on fonts available, the Serbian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems. In Bulgarian typography, many lowercase letterforms may more closely resemble the cursive forms on the one hand and Latin glyphs on the other hand, e.g. by having an ascender or descender or by using rounded arcs instead of sharp corners. Sometimes, uppercase letters may have a different shape as well, e.g. more triangular, Д and Л, like Greek delta Δ and lambda Λ. Notes: Depending on fonts available, the Bulgarian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems; in some cases, such as ж with k-like ascender, no such approximation exists. Accessing variant forms Computer fonts typically default to the Central/Eastern, Russian letterforms, and require the use of OpenType Layout (OTL) features to display the Western, Bulgarian or Southern, Serbian/Macedonian forms. Depending on the choices of the font manufacturer, they may either be automatically activated by the local variant locl feature for text tagged with an appropriate language code, or the author needs to opt-in by activating a stylistic set ss## or character variant cv## feature. These solutions only enjoy partial support and may render with default glyphs in certain software configurations. Cyrillic alphabets Among others, Cyrillic is the standard script for writing the following languages: Slavic languages: Belarusian, Bulgarian, Macedonian, Russian, Rusyn, Serbo-Croatian (Standard Serbian, Bosnian, and Montenegrin), Ukrainian Non-Slavic languages of Russia: Abaza, Adyghe, Azerbaijani (in Dagestan), Bashkir, Buryat, Chechen, Chuvash, Erzya, Ingush, Kabardian, Kalmyk, Karachay-Balkar, Kildin Sami, Komi, Mari, Moksha, Nogai, Ossetian, Romani, Sakha/Yakut, Tatar, Tuvan, Udmurt, Yuit (Yupik) Non-Slavic languages in other countries: Abkhaz, Aleut (now mostly in church texts), Dungan, Kazakh (to be replaced by Latin script by 2025), Kyrgyz, Mongolian (to also be written with traditional Mongolian script by 2025), Tajik, Tlingit (now only in church texts), Turkmen (officially replaced | official script of the European Union, following the Latin and Greek alphabets. The writing system dates back to the 9th century AD, when the Bulgarian tsar Simeon I the Great –following the cultural and political course of his father Boris I– commissioned a new script, the Early Cyrillic alphabet, to be made at the Preslav Literary School in the First Bulgarian Empire, which would replace the Glagolitic script, produced earlier by Saints Cyril and Methodius and the same disciples that created the new Slavic script in Bulgaria. The usage of the Cyrillic script in Bulgaria was made official in 893. The new script became the basis of alphabets used in various languages in Orthodox Church dominated Eastern Europe, both Slavic and non-Slavic (such as Romanian). For centuries Cyrillic was also used by Catholic and Muslim Slavs too (see Bosnian Cyrillic). Cyrillic is derived from the Greek uncial script, augmented by letters from the older Glagolitic alphabet, including some ligatures. These additional letters were used for Old Church Slavonic sounds not found in Greek. The script is named in honor of the Saint Cyril, one of the two Byzantine brothers, Saints Cyril and Methodius, who created the Glagolitic alphabet earlier on. 
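Returning to the OpenType features discussed above (locl for language-specific forms, ss## stylistic sets and cv## character variants for opt-in glyph variants): the sketch below, which assumes the third-party fontTools package and a placeholder font path, shows one way to check which of those feature tags a given Cyrillic font actually exposes in its GSUB table. Whether the Bulgarian or Serbian letterforms then appear still depends on the application honouring those features.

```python
# Sketch (assuming the third-party fontTools package) of how one might
# list the OpenType substitution features a font exposes, to see whether
# 'locl', 'ss##' or 'cv##' variant forms are available at all.

from fontTools.ttLib import TTFont

font = TTFont("SomeCyrillicFont.ttf")  # placeholder path, not a real file
gsub = font["GSUB"] if "GSUB" in font else None

tags = set()
if gsub is not None and gsub.table.FeatureList is not None:
    tags = {record.FeatureTag for record in gsub.table.FeatureList.FeatureRecord}

print("locl present:", "locl" in tags)
print("stylistic sets / character variants:",
      sorted(t for t in tags if t.startswith(("ss", "cv"))))
```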
Modern scholars believe that Cyrillic was developed and formalized by the early disciples of Cyril and Methodius in the Preslav Literary School, the most important early literary and cultural centre of the First Bulgarian Empire and of all Slavs: Unlike the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic scripts in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet. The earliest datable Cyrillic inscriptions have been found in the area of Preslav: in the medieval city itself and at the nearby Patleina Monastery, both in present-day Shumen Province, as well as in the Ravna Monastery and the Varna Monastery. With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School (14th and 15th centuries), such as Gregory Tsamblak or Constantine of Kostenets, the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. This is known in Russia as the second South-Slavic influence. In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several letters were designed by Peter himself. A distinction was introduced between upper-case and lower-case letters. West European typography culture was also adopted. The pre-reform forms of letters, called 'Полуустав', were notably kept for use in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel. Letters Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below. Capital and lowercase letters were not distinguished in old manuscripts. Yeri () was originally a ligature of Yer and I ( + = ). Iotation was indicated by ligatures formed with the letter І: (not an ancestor of modern Ya, Я, which is derived from ), , (ligature of and ), , . Sometimes different letters were used interchangeably, for example = = , as were typographical variants like = . There were also commonly used ligatures like = . The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors. The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character. The Unicode 5.1 standard, released on 4 April 2008, greatly improved computer support for early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8. Currency signs Some currency signs are derived from Cyrillic letters: The Ukrainian hryvnia sign (₴) is from the cursive minuscule Ukrainian Cyrillic letter He (г).
The Russian ruble sign (₽) from the majuscule Р. |
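Because the currency signs listed above are ordinary Unicode characters, their code points and official names can be checked directly. The following self-contained sketch is an illustration added here rather than part of the source text; note that the som sign is a recent addition to Unicode, so older Unicode databases may not yet know its name.

    # Print code points and Unicode names of currency signs derived from Cyrillic letters.
    import unicodedata

    SIGNS = ["₴", "₽", "⃀", "₸", "₮"]  # hryvnia, ruble, som, tenge, tögrög

    for sign in SIGNS:
        try:
            name = unicodedata.name(sign)
        except ValueError:
            # The som sign entered Unicode only recently (Unicode 14.0),
            # so older Python builds may not have a name for it yet.
            name = "(not in this Unicode database)"
        print(f"U+{ord(sign):04X}  {name}")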
between consonant and vowel is not always clear cut: there are syllabic consonants and non-syllabic vowels in many of the world's languages. One blurry area is in segments variously called semivowels, semiconsonants, or glides. On one side, there are vowel-like segments that are not in themselves syllabic, but form diphthongs as part of the syllable nucleus, as the i in English boil . On the other, there are approximants that behave like consonants in forming onsets, but are articulated very much like vowels, as the y in English yes . Some phonologists model these as both being the underlying vowel , so that the English word bit would phonemically be , beet would be , and yield would be phonemically . Likewise, foot would be , food would be , wood would be , and wooed would be . However, there is a (perhaps allophonic) difference in articulation between these segments, with the in yes and yield and the of wooed having more constriction and a more definite place of articulation than the in boil or bit or the of foot. The other problematic area is that of syllabic consonants, segments articulated as consonants but occupying the nucleus of a syllable. This may be the case for words such as church in rhotic dialects of English, although phoneticians differ in whether they consider this to be a syllabic consonant, , or a rhotic vowel, : Some distinguish an approximant that corresponds to a vowel , for rural as or ; others see these as a single phoneme, . Other languages use fricative and often trilled segments as syllabic nuclei, as in Czech and several languages in Democratic Republic of the Congo, and China, including Mandarin Chinese. In Mandarin, they are historically allophones of , and spelled that way in Pinyin. Ladefoged and Maddieson call these "fricative vowels" and say that "they can usually be thought of as syllabic fricatives that are allophones of vowels". That is, phonetically they are consonants, but phonemically they behave as vowels. Many Slavic languages allow the trill and the lateral as syllabic nuclei (see Words without vowels). In languages like Nuxalk, it is difficult to know what the nucleus of a syllable is, or if all syllables even have nuclei. If the concept of 'syllable' applies in Nuxalk, there are syllabic consonants in words like (?) 'seal fat'. Miyako in Japan is similar, with 'to build' and 'to pull'. Features Each spoken consonant can be distinguished by several phonetic features: The manner of articulation is how air escapes from the vocal tract when the consonant or approximant (vowel-like) sound is made. Manners include stops, fricatives, and nasals. The place of articulation is where in the vocal tract the obstruction of the consonant occurs, and which speech organs are involved. Places include bilabial (both lips), alveolar (tongue against the gum ridge), and velar (tongue against soft palate). In addition, there may be a simultaneous narrowing at another place of articulation, such as palatalisation or pharyngealisation. Consonants with two simultaneous places of articulation are said to be coarticulated. The phonation of a consonant is how the vocal cords vibrate during the articulation. When the vocal cords vibrate fully, the consonant is called voiced; when they do not vibrate at all, it is voiceless. The voice onset time (VOT) indicates the timing of the phonation. Aspiration is a feature of VOT. The airstream mechanism is how the air moving through the vocal tract is powered. 
Most languages have exclusively pulmonic egressive consonants, which use the lungs and diaphragm, but ejectives, clicks, and implosives use different mechanisms. The length is how long the obstruction of a consonant lasts. This feature is borderline distinctive in English, as in "wholly" vs. "holy" , but cases are limited to morpheme boundaries. Unrelated roots are differentiated in various languages such as Italian, Japanese, and Finnish, with two length levels, "single" and "geminate". Estonian and some Sami languages have three phonemic lengths: short, geminate, and long geminate, although the distinction between the geminate and overlong geminate includes suprasegmental features. The articulatory force is how much muscular energy is involved. This has been proposed many times, but no distinction relying exclusively on force has ever been demonstrated. All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop" . In this case, the airstream mechanism is omitted. Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction. Consonants are scheduled by their features in a number of IPA charts: Examples The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under | Miyako in Japan is similar, with 'to build' and 'to pull'. Features Each spoken consonant can be distinguished by several phonetic features: The manner of articulation is how air escapes from the vocal tract when the consonant or approximant (vowel-like) sound is made. Manners include stops, fricatives, and nasals. The place of articulation is where in the vocal tract the obstruction of the consonant occurs, and which speech organs are involved. Places include bilabial (both lips), alveolar (tongue against the gum ridge), and velar (tongue against soft palate). In addition, there may be a simultaneous narrowing at another place of articulation, such as palatalisation or pharyngealisation. Consonants with two simultaneous places of articulation are said to be coarticulated. The phonation of a consonant is how the vocal cords vibrate during the articulation. When the vocal cords vibrate fully, the consonant is called voiced; when they do not vibrate at all, it is voiceless. The voice onset time (VOT) indicates the timing of the phonation. Aspiration is a feature of VOT. The airstream mechanism is how the air moving through the vocal tract is powered. Most languages have exclusively pulmonic egressive consonants, which use the lungs and diaphragm, but ejectives, clicks, and implosives use different mechanisms. The length is how long the obstruction of a consonant lasts. This feature is borderline distinctive in English, as in "wholly" vs. "holy" , but cases are limited to morpheme boundaries. Unrelated roots are differentiated in various languages such as Italian, Japanese, and Finnish, with two length levels, "single" and "geminate". Estonian and some Sami languages have three phonemic lengths: short, geminate, and long geminate, although the distinction between the geminate and overlong geminate includes suprasegmental features. The articulatory force is how much muscular energy is involved. This has been proposed many times, but no distinction relying exclusively on force has ever been demonstrated. All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop" . 
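To make the feature-based description above concrete, the toy sketch below (an illustrative example added here, not drawn from the source) encodes a handful of English consonants as (voicing, place, manner) triples and recovers labels such as "voiceless alveolar stop"; as in the text, the airstream mechanism is left out because all of these are plain pulmonic egressive consonants.

    # Toy classification of a few English consonants by voicing, place and manner.
    CONSONANT_FEATURES = {
        "p": ("voiceless", "bilabial", "stop"),
        "b": ("voiced", "bilabial", "stop"),
        "t": ("voiceless", "alveolar", "stop"),
        "d": ("voiced", "alveolar", "stop"),
        "k": ("voiceless", "velar", "stop"),
        "g": ("voiced", "velar", "stop"),
        "s": ("voiceless", "alveolar", "fricative"),
        "z": ("voiced", "alveolar", "fricative"),
        "m": ("voiced", "bilabial", "nasal"),
        "n": ("voiced", "alveolar", "nasal"),
    }

    def describe(symbol):
        """Return a label such as 'voiceless alveolar stop' for a known consonant."""
        voicing, place, manner = CONSONANT_FEATURES[symbol]
        return f"{voicing} {place} {manner}"

    def find(voicing, place, manner):
        """Reverse lookup: which symbols carry a given feature bundle?"""
        return [s for s, feats in CONSONANT_FEATURES.items() if feats == (voicing, place, manner)]

    print(describe("t"))                        # voiceless alveolar stop
    print(find("voiced", "bilabial", "nasal"))  # ['m']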
In this case, the airstream mechanism is omitted. Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction. Consonants are scheduled by their features in a number of IPA charts: Examples The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under one analysis, 164 under another, plus some 30 vowels and tone. The types of consonants used in various languages are by no means universal. For instance, nearly all Australian languages lack fricatives; a large percentage of the world's languages lack voiced stops such as , , as phonemes, though they may appear phonetically. Most languages, however, do include one or more fricatives, with being the most common, and a liquid consonant or two, with the most common. The approximant is also widespread, and virtually all languages have one or more nasals, though a very few, such as the Central dialect of Rotokas, lack even these. This last language has the smallest number of consonants in the world, with just six. Most common The most frequent consonants in rhotic American English (that is, the ones appearing most frequently during speech) are . ( is less common in non-rhotic accents.) The most frequent consonant in many other languages is . The most universal consonants around the world (that is, the ones appearing in nearly all languages) are the three voiceless stops , , , and the two nasals , . However, even these common five are not completely universal. Several languages in the vicinity of the Sahara Desert, including Arabic, lack . Several languages of North America, such as Mohawk, lack both of the labials and . The Wichita language of Oklahoma and some West African languages, such as Ijo, lack the consonant on a phonemic level, but do use it phonetically, as an allophone of another consonant (of in the case of Ijo, and of in Wichita). A few languages on Bougainville Island and around Puget Sound, such as Makah, lack both of the nasals and altogether, except in special speech registers such as baby-talk. The 'click language' Nǁng lacks , and colloquial Samoan lacks both alveolars, and . Despite the 80-odd consonants of Ubykh, it lacks the plain velar in native words, |
Schiffer, some of the characteristics of costume jewelry in the Art Modern period were: Bold, lavish jewelry Large, chunky bracelets, charm bracelets, Jade/opal, citrine and topaz Poodle pins, Christmas tree pins, and other Christmas jewelry Rhinestones With the advent of the Mod period came "Body Jewelry". Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets. General history Costume jewelry has been part of the culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry. But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the industrial revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible. As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working-class woman, could own a small piece of costume jewelry. The average town and countrywoman could acquire and wear a considerable amount of this mass-produced jewelry that was both affordable and stylish. Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both the high and low priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen. A significant factor in the popularization of costume jewelry was Hollywood movies. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. If you admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex, you could buy a copy from Joseff of Hollywood, who made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry. Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three-strand faux pearl necklace worn by Barbara Bush to her husband's inaugural ball. In many instances, high-end fashion jewelry has achieved a "collectible" status and increased value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for 'signed pieces', that is pieces that have the maker's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality 'unsigned' pieces, especially if they are of an unusual design. 
Business and industry Costume jewelry is considered a discrete category of fashion accessory and | the use of the word "costume" to refer to what is now called an "outfit". Components Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or lucite, set in pewter, silver, nickel, or brass. During the depression years, rhinestones were even down-graded by some manufacturers to meet the cost of production. During the World War II era, sterling silver was often incorporated into costume jewelry designs primarily because: The components used for base metal were needed for wartime production (i.e., military applications), and a ban was placed on their use in the private sector. Base metal was originally popular because it could approximate platinum's color, sterling silver fulfilled the same function. This resulted in a number of years during which sterling silver costume jewelry was produced and some can still be found in today's vintage jewelry marketplace. Modern costume jewelry incorporates a wide range of materials. High-end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold- or silver-plated brass, and sometimes vermeil or sterling silver. Lower-priced jewelry may still use gold plating over pewter, nickel, or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood. Historical expression Costume jewelry can be characterized by the period in history in which it was made. Art Deco period (1920–1930s) The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. It was during this period that Coco Chanel introduced costume jewelry to complete the costume. The Art Deco movement died with the onset of the Great Depression and the outbreak of World War II. According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were: Free-flowing curves were replaced with a harshly geometric and symmetrical theme Long pendants, bangle bracelets, cocktail rings, and elaborate accessory items such as cigarette cases and holders Retro period (1935 to 1950) In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The retro period primarily included American-made jewelry, which had a distinctly American look. With the war in Europe, many European jewelry firms were forced to shut down. Many European designers emigrated to the U.S. since the economy was recovering. According to Schiffer, some of the characteristics of costume jewelry in the Retro period were: Glamour, elegance, and sophistication Flowers, bows, and sunburst designs with a Hollywood flair Moonstones, horse motifs, military influence, and ballerinas Bakelite and other plastic jewelry Art Modern period (1945 to 1960) In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s. According to Schiffer, some of the characteristics of costume jewelry in the Art Modern period were: Bold, lavish jewelry Large, chunky bracelets, charm bracelets, Jade/opal, citrine and topaz Poodle pins, Christmas tree pins, and other Christmas jewelry Rhinestones With the advent of the Mod period came "Body Jewelry". 
of them to leave any trace, and the islands continued to be ruled by the king of the Franks and its church remained part of the diocese of Coutances. From the beginning of the ninth century, Norse raiders appeared on the coasts. Norse settlement eventually succeeded initial attacks, and it is from this period that many place names of Norse origin appear, including the modern names of the islands. From the Duchy of Normandy In 933, the islands were granted to William I Longsword by Raoul King of Western Francia and annexed to the Duchy of Normandy. In 1066, William II of Normandy invaded and conquered England, becoming William I of England, also known as William the Conqueror. In the period 1204–1214, King John lost the Angevin lands in northern France, including mainland Normandy, to King Philip II of France, but managed to retain control of the Channel Islands. In 1259, his successor, Henry III of England, by the Treaty of Paris, officially surrendered his claim and title to the Duchy of Normandy, while the King of France gave up claim to the Channel Islands, which was based upon his position as feudal overlord of the Duke of Normandy. Since then, the Channel Islands have been governed as possessions of the Crown and were never absorbed into the Kingdom of England and its successor kingdoms of Great Britain and the United Kingdom. The islands were invaded by the French in 1338, who held some territory until 1345. Edward III of England granted a Charter in July 1341 to Jersey, Guernsey, Sark and Alderney, confirming their customs and laws to secure allegiance to the English Crown. Owain Lawgoch, a mercenary leader of a Free Company in the service of the French Crown, attacked Jersey and Guernsey in 1372, and in 1373 Bertrand du Guesclin besieged Mont Orgueil. The young King Richard II of England reconfirmed in 1378 the Charter rights granted by his grandfather, followed in 1394 with a second Charter granting, because of great loyalty shown to the Crown, exemption for ever, from English tolls, customs and duties. Jersey was occupied by the French in 1461 as part of an exchange of helping the Lancastrians fight against the Yorkists during The War of the Roses. It was retaken by the Yorkists in 1468. In 1483 a Papal bull decreed that the islands would be neutral during time of war. This privilege of neutrality enabled islanders to trade with both France and England and was respected until 1689 when it was abolished by Order in Council following the Glorious Revolution in Great Britain. Various attempts to transfer the islands from the diocese of Coutances (to Nantes (1400), Salisbury (1496), and Winchester (1499)) had little effect until an Order in Council of 1569 brought the islands formally into the diocese of Winchester. Control by the bishop of Winchester was ineffectual as the islands had turned overwhelmingly Calvinist and the episcopacy was not restored until 1620 in Jersey and 1663 in Guernsey. Sark in the 16th century was uninhabited until colonised from Jersey in the 1560s. The grant of seigneurship from Elizabeth I of England in 1565 forms the basis of Sark's constitution today. From the seventeenth century During the Wars of the Three Kingdoms, Jersey held out strongly for the Royalist cause, providing refuge for Charles, Prince of Wales in 1646 and 1649–1650, while the more strongly Presbyterian Guernsey more generally favoured the parliamentary cause (although Castle Cornet was held by Royalists and did not surrender until October 1651). 
The islands acquired commercial and political interests in the North American colonies. Islanders became involved with the Newfoundland fisheries in the seventeenth century. In recognition for all the help given to him during his exile in Jersey in the 1640s, Charles II gave George Carteret, Bailiff and governor, a large grant of land in the American colonies, which he promptly named New Jersey, now part of the United States of America. Sir Edmund Andros of Guernsey was an early colonial governor in North America, and head of the short-lived Dominion of New England. In the late eighteenth century, the Islands were dubbed "the French Isles". Wealthy French émigrés fleeing the Revolution sought residency in the islands. Many of the town domiciles existing today were built in that time. In Saint Peter Port, a large part of the harbour had been built by 1865. 20th century World War II The islands were the only part of the British Isles to be occupied by the German Army during World War II. The British Government demilitarised the islands in June 1940, and the lieutenant-governors were withdrawn on 21 June, leaving the insular administrations to continue government as best they could under impending military occupation. Before German troops landed, between 30 June and 4 July 1940, evacuation took place. Many young men had already left to join the Allied armed forces, as volunteers. 6,600 out of 50,000 left Jersey while 17,000 out of 42,000 left Guernsey. Thousands of children were evacuated with their schools to England and Scotland. The population of Sark largely remained where they were; but in Alderney, all but six people left. In Alderney, the occupying Germans built four camps in which over 700 people out of a total worker population of about 6,000 died. Due to the destruction of documents, it is impossible to state how many forced workers died in the other islands. Alderney had the only Nazi concentration camps on British soil. The Royal Navy blockaded the islands from time to time, particularly following the Invasion of Normandy in June 1944. There was considerable hunger and privation during the five years of German occupation, particularly in the final months when the population was close to starvation. Intense negotiations resulted in some humanitarian aid being sent via the Red Cross, leading to the arrival of Red Cross parcels in the supply ship SS Vega in December 1944. The German occupation of 1940–45 was harsh: over 2,000 Islanders were deported by the Germans, some Jews were sent to concentration camps; partisan resistance and retribution, accusations of collaboration, and slave labour also occurred. Many Spaniards, initially refugees from the Spanish Civil War, were brought to the islands to build fortifications. Later, Russians and Central Europeans continued the work. Many land mines were laid, with 65,718 land mines laid in Jersey alone. There was no resistance movement in the Channel Islands on the scale of that in mainland France. This has been ascribed to a range of factors including the physical separation of the Islands, the density of troops (up to one German for every two Islanders), the small size of the Islands precluding any hiding places for resistance groups, and the absence of the Gestapo from the occupying forces. Moreover, much of the population of military age had joined the British Army already. The end of the occupation came after VE-Day on 8 May 1945, Jersey and Guernsey being liberated on 9 May. 
The German garrison in Alderney was left until 16 May, and it was one of the last of the Nazi German remnants to surrender. The first evacuees returned on the first sailing from Great Britain on 23 June, but the people of Alderney were unable to start returning until December 1945. Many of the evacuees who returned home had difficulty reconnecting with their families after five years of separation. Post-1945 Following the liberation of 1945, reconstruction led to a transformation of the economies of the islands, attracting immigration and developing tourism. The legislatures were reformed and non-party governments embarked on social programmes, aided by the incomes from offshore finance, which grew rapidly from the 1960s. The islands decided not to join the European Economic Community when the UK joined, and remain outside. Since the 1990s, declining profitability of agriculture and tourism has challenged the governments of the islands. Flag gallery Governance The Channel Islands fall into two separate self-governing bailiwicks, the Bailiwick of Guernsey and the Bailiwick of Jersey. Both are British Crown dependencies, and neither is a part of the United Kingdom. They have been parts of the Duchy of Normandy since the tenth century, and Queen Elizabeth II is often referred to by her traditional and conventional title of Duke of Normandy. However, pursuant to the Treaty of Paris (1259), she governs in her right as The Queen (the "Crown in right of Jersey", and the "Crown in right of the république of the Bailiwick of Guernsey"), and not as the Duke. This notwithstanding, it is a matter of local pride for monarchists to treat the situation otherwise: the Loyal toast at formal dinners is to 'The Queen, our Duke', rather than to 'Her Majesty, The Queen' as in the UK. A bailiwick is a territory administered by a bailiff. Although the words derive from a common root ('bail' = 'to give charge of') there is a vast difference between the meanings of the word 'bailiff' in Great Britain and in the Channel Islands; a bailiff in Britain is a court-appointed private debt-collector authorised to collect judgment debts, in the Channel Islands, the Bailiff in each bailiwick is the civil head, presiding officer of the States, and also head of the judiciary, and thus the most important citizen in the bailiwick. In the early 21st century, the existence of governmental offices such as the bailiffs' with multiple roles straddling the different branches of government came under increased scrutiny for their apparent contravention of the doctrine of separation of powers—most notably in the Guernsey case of McGonnell -v- United Kingdom (2000) 30 EHRR 289. That case, following final judgement at the European Court of Human Rights, became part of the impetus for much recent constitutional change, particularly the Constitutional Reform Act 2005 (2005 c.4) in the UK, including the separation of the roles of the Lord Chancellor, the abolition of the House of Lords' judicial role, and its replacement by the UK Supreme Court. The islands' bailiffs, however, still retain their historic roles. The systems of government in the islands date from Norman times, which accounts for the names of the legislatures, the States, derived from the Norman 'États' or 'estates' (i.e. the Crown, the Church, and the people). The States have evolved over the centuries into democratic parliaments. The UK Parliament has power to legislate for the islands, but Acts of Parliament do not extend to the islands automatically. 
Usually, an Act gives power to extend its application to the islands by an Order in Council, after consultation. For the most part the islands legislate for themselves. Each island has its own primary legislature, known as the States of Guernsey and the States of Jersey, with Chief Pleas in Sark and the States of Alderney. The Channel Islands are not represented in the UK Parliament. Laws passed by the States are given royal assent by The Queen in Council, to whom the islands' governments are responsible. The islands have never been part of the European Union, and thus were not a party to the 2016 referendum on the EU membership, but were part of the Customs Territory of the European Community by virtue of Protocol Three to the Treaty on European Union. In September 2010, a Channel Islands Brussels Office was set up jointly by the two Bailiwicks to develop the Channel Islands' influence with the EU, to advise the Channel Islands' governments on European matters, and to promote economic links with the EU. Both bailiwicks are members of the British–Irish Council, and Jèrriais and Guernésiais are recognised regional languages of the islands. The legal courts are separate; separate courts of appeal have been in place since 1961. Among the legal heritage from Norman law is the Clameur de haro. The basis of the legal systems of both Bailiwicks is Norman customary law (Coutume) rather than the English Common Law, although elements of the latter have become established over time. Islanders are full British citizens, but were not classed as European citizens unless by descent from a UK national. Any British citizen who applies for a passport in Jersey or Guernsey receives a passport bearing the words "British Islands, Bailiwick of Jersey" or "British Islands, Bailiwick of Guernsey". Under the provisions of Protocol Three, Channel Islanders who do not have a close connection with the UK (no parent or grandparent from the UK, and have never been resident in the UK for a five-year period) did not automatically benefit from the EU | already included in the diocese of Coutances where they remained until the Reformation. There were probably some Celtic Britons who settled on the Islands in the 5th and 6th centuries AD (the indigenous Celts of Great Britain, and the ancestors of the modern Welsh, Cornish, and Bretons) who had emigrated from Great Britain in the face of invading Anglo-Saxons. But there were not enough of them to leave any trace, and the islands continued to be ruled by the king of the Franks and its church remained part of the diocese of Coutances. From the beginning of the ninth century, Norse raiders appeared on the coasts. Norse settlement eventually succeeded initial attacks, and it is from this period that many place names of Norse origin appear, including the modern names of the islands. From the Duchy of Normandy In 933, the islands were granted to William I Longsword by Raoul King of Western Francia and annexed to the Duchy of Normandy. In 1066, William II of Normandy invaded and conquered England, becoming William I of England, also known as William the Conqueror. In the period 1204–1214, King John lost the Angevin lands in northern France, including mainland Normandy, to King Philip II of France, but managed to retain control of the Channel Islands. 
|
others contain political or social commentary (such as The King of Comedy and Wag the Dog). In The Screenwriters Taxonomy (2017), Eric R. Williams contends that film genres are fundamentally based upon a film's atmosphere, character and story, and that the labels "drama" and "comedy" are therefore too broad to be considered genres. Instead, his comedy taxonomy argues that comedy is a type of film that contains at least a dozen different sub-types. History Silent film era The first comedy film was L'Arroseur Arrosé (1895), directed and produced by film pioneer Louis Lumière. Less than 60 seconds long, it shows a boy playing a prank on a gardener. The most noted comedy actors of the silent film era (1895-1927) were Charlie Chaplin, Harold Lloyd, and Buster Keaton. Sub-types Anarchic comedy The anarchic comedy film, as its name suggests, is a random or stream-of-consciousness type of humour which often lampoons a form of authority. The genre dates from the silent era. Notable examples of this type of film are those produced by Monty Python. Others include National Lampoon's Animal House (1978) and Marx Brothers films such as Duck Soup (1933). Bathroom comedy (or gross-out comedy) Gross-out films are a relatively recent development and rely heavily on vulgar, sexual or "toilet" humor. They often contain a healthy dose of profanity. Examples include Porky's (1982), Dumb and Dumber (1994), There's Something About Mary (1998), and American Pie (1999). Comedy of ideas This sub-type uses comedy to explore serious ideas such as religion, sex or politics. Often the characters represent particular divergent world views and are forced to interact for comedic effect and social commentary. Some examples include Bob Roberts (1992) and MASH (1970). Comedy of manners A comedy of manners satirizes the mores and affectations of a social class. The plot of a comedy of manners is often concerned with an illicit love affair or some other scandal. However, the plot is generally less important for its comedic effect than its witty dialogue. This form of comedy has a long ancestry, dating back at least as far as Much Ado about Nothing by William Shakespeare, published in 1623. Examples of comedy of manners films include Breakfast at Tiffany's (1961) and Under the Tuscan Sun (2003). Black comedy The black comedy film deals with taboo subjects, including death, murder, crime, suicide, and war, in a satirical manner. Examples include Arsenic and Old Lace (1944), Monsieur Verdoux (1947), Kind Hearts and Coronets (1949), The Ladykillers (1955), Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964), The Loved One (1965), MASH (1970), S.O.B. (1981), The King of Comedy (1983), Monty Python's The Meaning of Life (1983), Brazil (1985), After Hours (1985), The War of the Roses (1989), Heathers (1989), Wag the Dog (1997), Your Friends & Neighbors (1998), Lock, Stock and Two Smoking Barrels (1998), Panchathanthiram (2002), Keeping Mum (2005), Thank You for Smoking (2005), Burn After Reading (2008), Dev.D (2009), The Wolf of Wall Street (2013), Three Billboards Outside Ebbing, Missouri (2017), The Favourite (2018), Parasite (2019), AK vs AK (2020) and Doctor (2021). Farce Farcical films exaggerate situations beyond the realm of possibility, thereby making them entertaining. Film examples include In the Loop (2009) and Some Like It Hot (1959). Mockumentary Mockumentary comedies are fictional, but use a documentary style that includes interviews and "documentary" footage alongside regular scenes.
Examples include The Gods Must Be Crazy (1980), This Is Spinal Tap (1984), Waiting for Guffman (1996), Best In Show (2000), and Reboot Camp (2020). Observational humor These films find humor in the common practices of everyday life. Some film examples of observational humor include Carnage (2011) and The Divine Secrets of the Ya-Ya Sisterhood (2002). Parody (or spoof) A parody or spoof film satirizes other film genres or classic films. Such films employ sarcasm, stereotyping, mockery of scenes from other films, and the obviousness of meaning in a character's actions. Examples of this form include Mud and Sand (1922), Blazing Saddles (1974), Young Frankenstein (1974), Airplane! (1980), Spaceballs (1987), and Scary Movie (2000). Sex comedy The humor in sex comedy is primarily derived from sexual situations and desire, as in Knocked Up (2007) and Choke (2008). Situational comedy Humor that comes from knowing a stock group of characters (or character types) and then exposing them to different situations to create humorous and ironic juxtaposition; case in point: Galaxy Quest (1999) and Madea's Big Happy Family (2011). Straight comedy This broad sub-type applies to films that do not attempt a specific approach to comedy but, rather, use comedy for comedy's sake. Clueless (1995) and Mrs. Doubtfire (1993) are examples of straight comedy films. Slapstick films Slapstick films involve exaggerated, boisterous physical action to create impossible and humorous situations. Because slapstick relies predominantly on visual depictions of events, it does not require sound. Accordingly, the subgenre was ideal for silent movies and was prevalent during that era. Popular silent stars of the slapstick genre include Buster Keaton, Charlie Chaplin, Roscoe Arbuckle, and Harold Lloyd. Some of these stars, as well as acts such as Laurel and Hardy and the Three Stooges, also found success incorporating slapstick comedy into sound films. Modern examples of slapstick comedy include Mr. Bean's Holiday (2007) and The Three Stooges (2012). Surreal comedy Although not specifically linked to the history of surrealism, these comedies include behavior and storytelling techniques that are illogical, including bizarre juxtapositions, absurd situations and unpredictable reactions to normal situations. Some examples are Monty Python and the Holy Grail (1975) and Swiss Army Man (2016). Hybrid subgenres According to Williams' taxonomy, all film descriptions should contain their type (comedy or drama) combined with one (or more) subgenres. This combination does not create a separate genre but, rather, provides a better understanding of the film. Action comedy Films in this type blend comic antics and action, where the stars combine one-liners with a thrilling plot and daring stunts. The genre became a specific draw in North America in the eighties when comedians such as Eddie Murphy started taking more action-oriented roles, such as in 48 Hrs. (1982) and Beverly Hills Cop (1984). Sub-genres of the action comedy (labeled macro-genres by Williams) include: Martial arts films Slapstick martial arts films became a mainstay of Hong Kong action cinema through the work of Jackie Chan among others, such as Who Am I? (1998). Kung Fu Panda is an action comedy that focuses on the martial art of kung fu. Superhero films Some action films focus on superheroes; for example, The Incredibles, Hancock, Kick-Ass, and Mystery Men.
Other categories of the action comedy include: Buddy films Films starring mismatched partners for comedic effect, such as in Midnight Run, Rush Hour, 21 Jump Street, Bad Boys, Starsky and Hutch, Booksmart and Ted. Comedy thriller Comedy thriller is a type that combines elements of humor and suspense. Films such as Silver Streak, Charade, Kiss Kiss Bang Bang, In Bruges, Mr. and Mrs. Smith, Grosse |
present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films. Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read's expansion states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach which avoids and subverts the male gaze and traditional goal-oriented methods. Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – were repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture. Subcultural appeal and fandom Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. 
Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters. A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance to American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital. Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films which appeal to stereotypical male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the "extreme" nature of the film, which increases the appeal to youth subcultures fond of extreme sports. Matt Hills' concept of the "cult blockbuster" involves cult followings inside larger, mainstream films. Although these are big budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with their own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film topped the charts when it was released, it has developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and quotable dialogue draw a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream. Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. 
For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that to bolster their arguments, he is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to embellish the filmmakers' reception as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow. Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a "sub-subculture", a variant that outlived its parent subculture. Although often described as primarily composed of obsessed fans, cult film fandom can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom. If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, as well as theatrical screenings being privileged by the media and fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fan works are often unsanctioned, transformative, and ignore fictional canon. Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. 
Thus, fandom can keep the mainstream at bay while defining itself in terms of the "Other", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of "otherness" and thus seen as valid to consume: consumers purchasing independent or niche publications are discerning consumers, but the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity. Types "So bad it's good" The critic Michael Medved characterized examples of the "so bad it's good" class of low-budget cult film through books such as The Golden Turkey Awards. These films include financially fruitless and critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), The Room (2003), and the Ugandan action-comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation of films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy. Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as "weird for weirdness' sake" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label "so bad it's good" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, "so bad it's good" films, known as nanars, have given rise to a subculture with dedicated websites such as Nanarland, film festivals and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand films has led critics to question whether "so bad it's good" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can lead to lucrative showings for local theaters and merchandisers. Camp and guilty pleasures Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistic-minded films.
This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic. Nostalgia According to academic Brigid Cherry, nostalgia "is a strong element of certain kinds of cult appeal." When Veoh added many cult films to their site, they cited nostalgia as a factor for their popularity. Academic I. Q. Hunter describes cult films as "New Hollywood in extremis" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of the time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s. Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading off of nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in its popularity among midnight movie crowds. Midnight movies Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in their anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a night life and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they not only satirically attacked society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. 
In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box office flops, such as Donnie Darko (2001), The Warriors (1979) and Office Space (1999). Modern midnight movies retain their popularity and have been strongly diverging from mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still bring reliable crowds. Art and exploitation Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art. Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, this is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are "the most masculinized, youth-oriented, populist, and openly pornographic softcore area." The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision saw him received as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as "crass, coarse, and camp ... perfect fodder for a cult following." "Sick films", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas. B and genre films Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize non-mainstream and less commercial aspects of it. B films, which are often conflated with exploitation, are as important to cult films as exploitation. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. 
Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeal to cult audiences through nostalgia, camp, and spectacle, to | competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy. Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed as marathons where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's A Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows flaunting society's taboos and testing one's fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect. Despite their oppositional nature, cult films can produce celebrities. As with cult films themselves, authenticity is an important aspect of their popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, found casting difficult after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s.
Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels. Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema if she was even mentioned. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film again. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. Qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years. Transgression and censorship Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as "downtown culture". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen. According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. 
Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003). Critics can also polarize audiences and lead debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit On Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films. Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both subject matter and its depiction are portrayed in extreme ways that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. In marketing these films, young men are predominantly targeted. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism. Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with their own interpretation and reasons for appreciating it. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, they were not trying to establish a reputation. 
They were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability. Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported. The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called "video nasties" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC which removed unsimulated animal killings, which limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collection with imports and bootlegs. Cult films previously banned are sometimes released with much fanfare and the fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added and restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films. Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read's expansion states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. 
Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach which avoids and subverts the male gaze and traditional goal-oriented methods. Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. Her use of common cult film motifs – female nudity and ambiguous gender – were repurposed to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denouncement from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture. Subcultural appeal and fandom Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters. A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few |
the city was located), Pontus and Asia comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine. After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march, the city looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425. Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequent to this, new walls were built to defend the city and the fleet on the Danube improved. After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople. 527–565: Constantinople in the Age of Justinian The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem in either the Church of the Resurrection or the New Church. Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. In the time of Justinian, public order in Constantinople became a critical political issue. Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. 
The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion of 532, known as the "Nika" riots (from the battle-cry of "Conquer!" of those involved). Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, "O Solomon, I have outdone thee!" Hagia Sophia was served by 600 people including 80 priests, and cost 20,000 pounds of gold to build. Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the Emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmet II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within a set distance of the sea front, in order to protect the view. During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian in 541–542 AD, which killed perhaps 40% of the city's inhabitants. Survival, 565–717: Constantinople during the Byzantine Dark Ages In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result. While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then from 717 to 718.
The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek Fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help. He was called Saviour of Europe. 717–1025: Constantinople during the Macedonian Renaissance In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire. Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867. In 860, an attack was made on the city by a new principality set up a few years earlier at Kyiv by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Prince's Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens. In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kyiv: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. However, following the death of an Emperor, they became known also for plunder in the Imperial palaces. Later in the 11th Century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England. The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome. In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000. Iconoclast controversy in Constantinople In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. 
The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a "fruit store and aviary". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787. The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches. 1025–1081: Constantinople after Basil II In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were distracted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople. 1081–1185: Constantinople under the Comneni Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096, but declining to put itself under Byzantine command set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a hospital for the poor of 50 beds. With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century. 
Toward the end of Manuel I Komnenos's reign, the number of foreigners in the city reached about 60,000–80,000 out of a total population of about 400,000. In 1171, Constantinople also contained a small community of 2,500 Jews. In 1182, most Latin (Western European) inhabitants of Constantinople were massacred. In artistic terms, the 12th century was a very productive period. There was a revival in the mosaic art, for example: mosaics became more realistic and vivid, with an increased emphasis on depicting three-dimensional forms. There was an increased demand for art, with more people having access to the necessary wealth to commission and pay for such work. According to N.H. Baynes (Byzantium, An Introduction to East Roman Civilization): 1185–1261: Constantinople during the Imperial Exile On 25 July 1197, Constantinople was struck by a severe fire which burned the Latin Quarter and the area around the Gate of the Droungarios on the Golden Horn. Nevertheless, the destruction wrought by the 1197 fire paled in comparison with that brought by the Crusaders. In the course of a plot between Philip of Swabia, Boniface of Montferrat and the Doge of Venice, the Fourth Crusade was, despite papal excommunication, diverted in 1203 against Constantinople, ostensibly promoting the claims of Alexios IV Angelos, brother-in-law of Philip and son of the deposed emperor Isaac II Angelos. The reigning emperor Alexios III Angelos had made no preparation. The Crusaders occupied Galata, broke the defensive chain protecting the Golden Horn, and entered the harbour, where on 27 July they breached the sea walls: Alexios III fled. But the new Alexios IV Angelos found the Treasury inadequate, and was unable to make good the rewards he had promised to his western allies. Tension between the citizens and the Latin soldiers increased. In January 1204, the protovestiarius Alexios Murzuphlos provoked a riot, presumably to intimidate Alexios IV, but its only result was the destruction of the great statue of Athena Promachos, the work of Phidias, which stood in the principal forum facing west. In February 1204, the people rose again: Alexios IV was imprisoned and executed, and Murzuphlos took the purple as Alexios V Doukas. He made some attempt to repair the walls and organise the citizenry, but there had been no opportunity to bring in troops from the provinces and the guards were demoralised by the revolution. An attack by the Crusaders on 6 April failed, but a second from the Golden Horn on 12 April succeeded, and the invaders poured in. Alexios V fled. The Senate met in Hagia Sophia and offered the crown to Theodore Lascaris, who had married into the Angelos dynasty, but it was too late. He came out with the Patriarch to the Golden Milestone before the Great Palace and addressed the Varangian Guard. Then the two of them slipped away with many of the nobility and embarked for Asia. By the next day the Doge and the leading Franks were installed in the Great Palace, and the city was given over to pillage for three days. Sir Steven Runciman, historian of the Crusades, wrote that the sack of Constantinople is "unparalleled in history". For the next half-century, Constantinople was the seat of the Latin Empire. Under the rulers of the Latin Empire, the city declined, both in population and the condition of its buildings.
Alice-Mary Talbot cites an estimated population for Constantinople of 400,000 inhabitants; after the destruction wrought by the Crusaders on the city, about one third were homeless, and numerous courtiers, nobility, and higher clergy followed various leading personages into exile. "As a result Constantinople became seriously depopulated," Talbot concludes. The Latins took over at least 20 churches and 13 monasteries, most prominently the Hagia Sophia, which became the cathedral of the Latin Patriarch of Constantinople. It is to these that E.H. Swift attributed the construction of a series of flying buttresses to shore up the walls of the church, which had been weakened over the centuries by earthquake tremors. However, this act of maintenance is an exception: for the most part, the Latin occupiers were too few to maintain all of the buildings, whether secular or sacred, and many became targets for vandalism or dismantling. Bronze and lead were removed from the roofs of abandoned buildings and melted down and sold to provide money to the chronically under-funded Empire for defense and to support the court; Deno John Geanokoplos writes that "it may well be that a division is suggested here: Latin laymen stripped secular buildings, ecclesiastics, the churches." Buildings were not the only targets of officials looking to raise funds for the impoverished Latin Empire: the monumental sculptures which adorned the Hippodrome and fora of the city were pulled down and melted for coinage. "Among the masterpieces destroyed," writes Talbot, "were a Herakles attributed to the fourth-century B.C. sculptor Lysippos, and monumental figures of Hera, Paris, and Helen." The Nicaean emperor John III Vatatzes reportedly saved several churches from being dismantled for their valuable building materials by sending money to the Latins "to buy them off" (exonesamenos). According to Talbot, these included the churches of Blachernae, Rouphinianai, and St. Michael at Anaplous. He also granted funds for the restoration of the Church of the Holy Apostles, which had been seriously damaged in an earthquake. The Byzantine nobility scattered, many going to Nicaea, where Theodore Lascaris set up an imperial court, or to Epirus, where Theodore Angelus did the same; others fled to Trebizond, where one of the Comneni had already, with Georgian support, established an independent seat of empire. Nicaea and Epirus both vied for the imperial title, and tried to recover Constantinople. In 1261, Constantinople was captured from its last Latin ruler, Baldwin II, by the forces of the Nicaean emperor Michael VIII Palaiologos. 1261–1453: Palaiologan Era and the Fall of Constantinople Although Constantinople was retaken by Michael VIII Palaiologos, the Empire had lost many of its key economic resources, and struggled to survive. The palace of Blachernae in the north-west of the city became the main Imperial residence, with the old Great Palace on the shores of the Bosporus going into decline. When Michael VIII captured the city, its population was 35,000 people, but, by the end of his reign, he had succeeded in increasing the population to about 70,000 people. The Emperor achieved this by summoning former residents who had fled the city when the crusaders captured it, and by relocating Greeks from the recently reconquered Peloponnese to the capital.
Military defeats, civil wars, earthquakes and natural disasters were joined by the Black Death, which spread to Constantinople in 1347 and exacerbated the people's sense that they were doomed by God. In 1453, when the Ottoman Turks captured the city, it contained approximately 50,000 people. Constantinople was conquered by the Ottoman Empire on 29 May 1453. The Ottomans were commanded by 21-year-old Ottoman Sultan Mehmed II. The conquest of Constantinople followed a seven-week siege which had begun on 6 April 1453. 1453–1922: Ottoman Kostantiniyye The Christian Orthodox city of Constantinople was now under Ottoman control. When Mehmed II finally entered Constantinople through the Gate of Charisius (today known as Edirnekapı or Adrianople Gate), he immediately rode his horse to the Hagia Sophia, where, after the doors were axed down, the thousands of citizens hiding within the sanctuary were raped and enslaved, often with slavers fighting each other to the death over particularly beautiful and valuable slave girls. Moreover, symbols of Christianity were everywhere vandalized or destroyed, including the crucifix of Hagia Sophia, which was paraded through the sultan's camps. Afterwards he ordered his soldiers to stop hacking at the city's valuable marbles and 'be satisfied with the booty and captives; as for all the buildings, they belonged to him'. He ordered that an imam meet him there in order to chant the adhan, thus transforming the Orthodox cathedral into a Muslim mosque and solidifying Islamic rule in Constantinople. Mehmed's main concern with Constantinople had to do with solidifying control over the city and rebuilding its defenses. After 45,000 captives were marched from the city, building projects were commenced immediately after the conquest. In 478 BC, as part of the Greek counterattack to the Second Persian invasion of Greece, a Greek army led by the Spartan general Pausanias captured Byzantium, which remained an independent, yet subordinate, city under the Athenians, and later under the Spartans after 411 BC. A farsighted treaty with the emergent power of Rome, which stipulated tribute in exchange for independent status, allowed it to enter Roman rule unscathed. This treaty would pay dividends as Byzantium maintained its independent status and prospered under the peace and stability of the Pax Romana for nearly three centuries, until the late 2nd century AD. Byzantium was never a major influential city-state like Athens, Corinth or Sparta, but the city enjoyed relative peace and steady growth as a prosperous trading city, owing to its remarkable position. The site lay astride the land route from Europe to Asia and the seaway from the Black Sea to the Mediterranean, and had in the Golden Horn an excellent and spacious harbor. Already then, in Greek and early Roman times, Byzantium was famous for the strategic geographic position that made it difficult to besiege and capture, and its position at the crossroads of the Asiatic-European trade route over land and as the gateway between the Mediterranean and Black Seas made it too valuable a settlement to abandon, as Emperor Septimius Severus later realized when he razed the city to the ground for supporting Pescennius Niger's claim to the throne. It was a move greatly criticized by the contemporary consul and historian Cassius Dio, who said that Severus had destroyed "a strong Roman outpost and a base of operations against the barbarians from Pontus and Asia".
He would later rebuild Byzantium towards the end of his reign, in which it would be briefly renamed Augusta Antonina, fortifying it with a new city wall in his name, the Severan Wall. 324–337: The refoundation as Constantinople Constantine had altogether more colourful plans. Having restored the unity of the Empire, and, being in the course of major governmental reforms as well as of sponsoring the consolidation of the Christian church, he was well aware that Rome was an unsatisfactory capital. Rome was too far from the frontiers, and hence from the armies and the imperial courts, and it offered an undesirable playground for disaffected politicians. Yet it had been the capital of the state for over a thousand years, and it might have seemed unthinkable to suggest that the capital be moved to a different location. Nevertheless, Constantine identified the site of Byzantium as the right place: a place where an emperor could sit, readily defended, with easy access to the Danube or the Euphrates frontiers, his court supplied from the rich gardens and sophisticated workshops of Roman Asia, his treasuries filled by the wealthiest provinces of the Empire. Constantinople was built over six years, and consecrated on 11 May 330. Constantine divided the expanded city, like Rome, into 14 regions, and ornamented it with public works worthy of an imperial metropolis. Yet, at first, Constantine's new Rome did not have all the dignities of old Rome. It possessed a proconsul, rather than an urban prefect. It had no praetors, tribunes, or quaestors. Although it did have senators, they held the title clarus, not clarissimus, like those of Rome. It also lacked the panoply of other administrative offices regulating the food supply, police, statues, temples, sewers, aqueducts, or other public works. The new programme of building was carried out in great haste: columns, marbles, doors, and tiles were taken wholesale from the temples of the empire and moved to the new city. In similar fashion, many of the greatest works of Greek and Roman art were soon to be seen in its squares and streets. The emperor stimulated private building by promising householders gifts of land from the imperial estates in Asiana and Pontica and on 18 May 332 he announced that, as in Rome, free distributions of food would be made to the citizens. At the time, the amount is said to have been 80,000 rations a day, doled out from 117 distribution points around the city. Constantine laid out a new square at the centre of old Byzantium, naming it the Augustaeum. The new senate-house (or Curia) was housed in a basilica on the east side. On the south side of the great square was erected the Great Palace of the Emperor with its imposing entrance, the Chalke, and its ceremonial suite known as the Palace of Daphne. Nearby was the vast Hippodrome for chariot-races, seating over 80,000 spectators, and the famed Baths of Zeuxippus. At the western entrance to the Augustaeum was the Milion, a vaulted monument from which distances were measured across the Eastern Roman Empire. From the Augustaeum led a great street, the Mese, lined with colonnades. As it descended the First Hill of the city and climbed the Second Hill, it passed on the left the Praetorium or law-court. Then it passed through the oval Forum of Constantine where there was a second Senate-house and a high column with a statue of Constantine himself in the guise of Helios, crowned with a halo of seven rays and looking toward the rising sun. 
From there, the Mese passed on and through the Forum Tauri and then the Forum Bovis, and finally up the Seventh Hill (or Xerolophus) and through to the Golden Gate in the Constantinian Wall. After the construction of the Theodosian Walls in the early 5th century, it was extended to the new Golden Gate, reaching a total length of seven Roman miles. After the construction of the Theodosian Walls, Constantinople consisted of an area approximately the size of Old Rome within the Aurelian walls, or some 1,400 ha. 337–529: Constantinople during the Barbarian Invasions and the fall of the West The importance of Constantinople increased, but it was gradual. From the death of Constantine in 337 to the accession of Theodosius I, emperors had been resident only in the years 337–338, 347–351, 358–361, 368–369. Its status as a capital was recognized by the appointment of the first known Urban Prefect of the City Honoratus, who held office from 11 December 359 until 361. The urban prefects had concurrent jurisdiction over three provinces each in the adjacent dioceses of Thrace (in which the city was located), Pontus and Asia comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine. After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march, the city looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425. Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequent to this, new walls were built to defend the city and the fleet on the Danube improved. After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople. 527–565: Constantinople in the Age of Justinian The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. 
After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem, to either the Church of the Resurrection or the New Church. Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. In the time of Justinian, public order in Constantinople became a critical political issue. Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion in 532, known as the "Nika" riots (from the battle-cry of "Conquer!" of those involved). Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, "O Solomon, I have outdone thee!" Hagia Sophia was served by 600 people including 80 priests, and cost 20,000 pounds of gold to build. Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the Emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmed II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within a certain distance of the sea front, in order to protect the view. During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian in 541–542 AD. It killed perhaps 40% of the city's inhabitants.
Survival, 565–717: Constantinople during the Byzantine Dark Ages In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result. While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then in 717 to 718. The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek Fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help. He was called Saviour of Europe. 717–1025: Constantinople during the Macedonian Renaissance In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire. Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867. In 860, an attack was made on the city by a new principality set up a few years earlier at Kyiv by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Prince's Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens. In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kyiv: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. 
However, following the death of an Emperor, they became known also for plunder in the Imperial palaces. Later in the 11th Century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England. The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome. In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000. Iconoclast controversy in Constantinople In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a "fruit store and aviary". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787. The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches. 1025–1081: Constantinople after Basil II In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were distracted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople. 1081–1185: Constantinople under the Comneni Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. 
In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096, but declining to put itself under Byzantine command set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a hospital for the poor of 50 beds. With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century.
Columbus, Missouri Columbus, Montana Columbus, Nebraska Columbus, New Jersey Columbus, New Mexico Columbus, New York Columbus, North Carolina Columbus, North Dakota Columbus, Ohio, the largest city in United States with this name Columbus, Texas Columbus, Wisconsin Columbus (town), Wisconsin Columbus Avenue (disambiguation) Columbus Circle, a traffic circle in Manhattan, New York Columbus City (disambiguation) Columbus Township (disambiguation) Persons with the name Forename Columbus Caldwell (1830–1908), American politician Columbus Germain (1827–1880), American politician Columbus Short (born 1982), American choreographer and actor Surname Bartholomew Columbus (c. 1461–1515), Christopher Columbus's younger brother Chris Columbus (filmmaker) (born 1958), American filmmaker Diego Columbus (1479/80–1526), Christopher Columbus' eldest son Ferdinand Columbus (1488–1539), Christopher Columbus' second son Scott Columbus (1956–2011), long-time drummer for the heavy metal band Manowar Arts, entertainment, and media Films Columbus (2015 film), an Indian comedy, subtitled "Discovering Love" Columbus (2017 film), an American drama set amidst the architecture of Columbus, Indiana Columbus (Star Trek), a shuttlecraft in the Star Trek series Music Opera Columbus (Egk), German-language opera by Egk, 1943 Columbus, 1855 opera by František Škroup Christophe Colomb, French-language opera by Milhaud often referred to as Columbus in English sources Other uses in music Columbus (Herzogenberg), large scale cantata by Heinrich von Herzogenberg 1870 "Colombus", song by Mary Black from No Frontiers "Columbus" (song), a song by the band Kent from their album Tillbaka till samtiden Christopher Columbus, pastiche of music by Offenbach to a new English libretto by Don White recorded by the Opera Rara label in 1977 Other uses in arts, entertainment, and media Columbus (novel), a 1941 novel about Christopher Columbus by Rafael Sabatini Columbus (Bartholdi), a statue depicting Christopher Columbus by Frédéric Auguste Bartholdi, in Providence, Rhode Island, US Columbus Edwards, the character known as Lum of Lum and Abner Brands and enterprises COLUMBUS, ab initio quantum chemistry software ColumBus, former name of Howard Transit in Howard County, Maryland Columbus Communications, a cable television and broadband speed Internet service provider in the Caribbean region Columbus Salame, an American food processing company Columbus Tubing, an Italian manufacturer of bicycle frame tubing Columbus Buggy Company, an American automotive manufacturer from 1875 to 1913 Ships Columbus (1824), a disposable
Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there. Poetry The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick. Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; the Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986. The poet Laurence Binyon wrote "For the Fallen" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps, and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription "FOR THE FALLEN / Composed on these cliffs, 1914". The plaque also bears below this the fourth stanza (sometimes referred to as "The Ode") of the poem: "They shall grow not old, as we that are left grow old / Age shall not weary them, nor the years condemn / At the going down of the sun and in the morning / We will remember them". Other literary works Cornwall produced a substantial number of passion plays such as the Ordinalia during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language (see also Cornish literature). Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian. Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story; the second act of Richard Wagner's opera Tristan und Isolde takes place in Cornwall, as do Gilbert and Sullivan's operettas The Pirates of Penzance and Ruddigore. Clara Vyvyan was the author of various books about many aspects of Cornish life such as Our Cornwall. She once wrote: "The Loneliness of Cornwall is a loneliness unchanged by the presence of men, its freedoms a freedom inexpressible by description or epitaph. You cannot say Cornwall is this or that. You cannot describe it in a word or visualise it in a second. You may know the country from east to west and sea to sea, but if you close your eyes and think about it no clear-cut image rises before you. In this quality of changefulness have we possibly surprised the secret of Cornwall's wild spirit--in this intimacy the essence of its charm? Cornwall!". A level of Tomb Raider: Legend, a game dealing with Arthurian Legend, takes place in Cornwall at a museum above King Arthur's tomb. The adventure game The Lost Crown is set in the fictional town of Saxton, which uses the Cornish settlements of Polperro, Talland and Looe as its model. The fairy tale Jack the Giant Killer takes place in Cornwall. Sports The main sports played in Cornwall are rugby, football and cricket. Athletes from Truro have done well in Olympic and Commonwealth Games fencing, winning several medals.
Surfing is popular, particularly with tourists, thousands of whom take to the water throughout the summer months. Some towns and villages have bowling clubs, and a wide variety of British sports are played throughout Cornwall. Cornwall is also one of the few places in England where shinty is played; the English Shinty Association is based in Penryn. The Cornwall County Cricket Club plays as one of the minor counties of English cricket. Truro, all of the towns, and some villages have football clubs belonging to the Cornwall County Football Association. Rugby football Viewed as an "important identifier of ethnic affiliation", rugby union has become a sport strongly tied to notions of Cornishness, and since the 20th century it has emerged as one of the most popular spectator and team sports in Cornwall (perhaps the most popular), with professional Cornish rugby footballers being described as a "formidable force", "naturally independent, both in thought and deed, yet paradoxically staunch English patriots whose top players have represented England with pride and passion". In 1985, sports journalist Alan Gibson made a direct connection between love of rugby in Cornwall and the ancient parish games of hurling and wrestling that existed for centuries before rugby officially began. Among Cornwall's native sports are a distinctive form of Celtic wrestling related to Breton wrestling, and Cornish hurling, a kind of mediaeval football played with a silver ball (distinct from Irish hurling). Cornish wrestling is Cornwall's oldest sport, and as Cornwall's native tradition it has travelled the world to places like Victoria, Australia, and Grass Valley, California, following the miners and gold rushes. Cornish hurling now takes place at St. Columb Major, St Ives, and less frequently at Bodmin. In rugby league, Cornwall R.L.F.C., founded in 2021, will represent the county in the professional league system. The semi-pro club will start in the third-tier RFL League 1. At amateur level, the county is represented by Cornish Rebels. Surfing and watersports Due to its long coastline, various maritime sports are popular in Cornwall, notably sailing and surfing. International events in both are held in Cornwall. Cornwall hosted the Inter-Celtic Watersports Festival in 2006. Surfing in particular is very popular, as locations such as Bude and Newquay offer some of the best surf in the UK. Pilot gig rowing has been popular for many years, and the World Championships take place annually on the Isles of Scilly. On 2 September 2007, 300 surfers at Polzeath beach set a new world record for the highest number of surfers riding the same wave, as part of the Global Surf Challenge and a project called Earthwave to raise awareness about global warming. Fencing As its population is comparatively small and largely rural, Cornwall's contribution to British national sport has been limited; the county's greatest successes have come in fencing. In 2014, half of the men's GB team fenced for Truro Fencing Club, and three Truro fencers appeared at the 2012 Olympics. Cuisine Cornwall has a strong culinary heritage. Surrounded on three sides by the sea amid fertile fishing grounds, Cornwall naturally has fresh seafood readily available; Newlyn is the largest fishing port in the UK by value of fish landed, and is known for its wide range of restaurants.
Television chef Rick Stein has long operated a fish restaurant in Padstow for this reason, and Jamie Oliver chose to open his second restaurant, Fifteen, in Watergate Bay near Newquay. MasterChef host and founder of Smiths of Smithfield, John Torode, in 2007 purchased Seiners in Perranporth. One famous local fish dish is Stargazy pie, a fish-based pie in which the heads of the fish stick through the piecrust, as though "star-gazing". The pie is cooked as part of traditional celebrations for Tom Bawcock's Eve, but is not generally eaten at any other time. Cornwall is perhaps best known though for its pasties, a savoury dish made with pastry. Today's pasties usually contain a filling of beef steak, onion, potato and swede with salt and white pepper, but historically pasties had a variety of different fillings. "Turmut, 'tates and mate" (i.e. "Turnip, potatoes and meat", turnip being the Cornish and Scottish term for swede, itself an abbreviation of 'Swedish Turnip', the British term for rutabaga) describes a filling once very common. For instance, the licky pasty contained mostly leeks, and the herb pasty contained watercress, parsley, and shallots. Pasties are often locally referred to as oggies. Historically, pasties were also often made with sweet fillings such as jam, apple and blackberry, plums or cherries. The wet climate and relatively poor soil of Cornwall make it unsuitable for growing many arable crops. However, it is ideal for growing the rich grass required for dairying, leading to the production of Cornwall's other famous export, clotted cream. This forms the basis for many local specialities including Cornish fudge and Cornish ice cream. Cornish clotted cream has Protected Geographical Status under EU law, and cannot be made anywhere else. Its principal manufacturer is A. E. Rodda & Son of Scorrier. Local cakes and desserts include Saffron cake, Cornish heavy (hevva) cake, Cornish fairings biscuits, figgy 'obbin, Cream tea and whortleberry pie. There are also many types of beers brewed in Cornwall—those produced by Sharp's Brewery, Skinner's Brewery, Keltek Brewery and St Austell Brewery are the best known—including stouts, ales and other beer types. There is some small scale production of wine, mead and cider. Politics and administration Cornish national identity Cornwall is recognised by Cornish and Celtic political groups as one of six Celtic nations, alongside Brittany, Ireland, the Isle of Man, Scotland and Wales. (The Isle of Man Government and the Welsh Government also recognise Asturias and Galicia.) Cornwall is represented, as one of the Celtic nations, at the Festival Interceltique de Lorient, an annual celebration of Celtic culture held in Brittany. Cornwall Council consider Cornwall's unique cultural heritage and distinctiveness to be one of the area's major assets. They see Cornwall's language, landscape, Celtic identity, political history, patterns of settlement, maritime tradition, industrial heritage, and non-conformist tradition, to be among the features making up its "distinctive" culture. However, it is uncertain how many of the people living in Cornwall consider themselves to be Cornish; results from different surveys (including the national census) have varied. In the 2001 census, 7 per cent of people in Cornwall identified themselves as Cornish, rather than British or English. However, activists have argued that this underestimated the true number as there was no explicit "Cornish" option included in the official census form. 
Subsequent surveys have suggested that as many as 44 per cent identify as Cornish. Many people in Cornwall say that this issue would be resolved if a Cornish option became available on the census. The question and content recommendations for the 2011 Census provided an explanation of the process of selecting an ethnic identity which is relevant to the understanding of the often quoted figure of 37,000 who claim Cornish identity. On 24 April 2014 it was announced that Cornish people have been granted minority status under the European Framework Convention for the Protection of National Minorities. Local politics With the exception of the Isles of Scilly, Cornwall is governed by a unitary authority, Cornwall Council, based in Truro. The Crown Court is based at the Courts of Justice in Truro. Magistrates' Courts are found in Truro (but at a different location to the Crown Court) and at Bodmin. The Isles of Scilly form part of the ceremonial county of Cornwall, and have, at times, been served by the same county administration. Since 1890 they have been administered by their own unitary authority, the Council of the Isles of Scilly. They are grouped with Cornwall for other administrative purposes, such as the National Health Service and Devon and Cornwall Police. Before reorganisation on 1 April 2009, council functions throughout the rest of Cornwall were organised in two tiers, with Cornwall County Council and district councils for its six districts, Caradon, Carrick, Kerrier, North Cornwall, Penwith, and Restormel. While projected to streamline services, cut red tape and save around £17 million a year, the reorganisation was met with wide opposition, with a poll in 2008 showing 89% disapproval from Cornish residents. The first elections for the unitary authority were held on 4 June 2009. The council has 123 seats; the largest party (in 2017) is the Conservatives, with 46 seats. The Liberal Democrats are the second-largest party, with 37 seats, with the Independents the third-largest grouping with 30. Before the creation of the unitary council, the former county council had 82 seats, the majority of which were held by the Liberal Democrats, elected at the 2005 county council elections. The six former districts had a total of 249 council seats, and the groups with greatest numbers of councillors were Liberal Democrats, Conservatives and Independents. Parliament and national politics Following a review by the Boundary Commission for England taking effect at the 2010 general election, Cornwall is divided into six county constituencies to elect MPs to the House of Commons of the United Kingdom. Before the 2010 boundary changes Cornwall had five constituencies, all of which were won by Liberal Democrats at the 2005 general election. In the 2010 general election Liberal Democrat candidates won three constituencies and Conservative candidates won three other constituencies. At the 2015 general election all six Cornish seats were won by Conservative candidates; all these Conservative MPs retained their seats at the 2017 general election, and the Conservatives won all six constituencies again at the 2019 general election. Until 1832, Cornwall had 44 MPs—more than any other county—reflecting the importance of tin to the Crown. Most of the increase in numbers of MPs came between 1529 and 1584 after which there was no change until 1832. Devolution movement Cornish nationalists have organised into two political parties: Mebyon Kernow, formed in 1951, and the Cornish Nationalist Party. 
In addition to the political parties, there are various interest groups such as the Revived Cornish Stannary Parliament and the Celtic League. The Cornish Constitutional Convention was formed in 2000 as a cross-party organisation including representatives from the private, public and voluntary sectors to campaign for the creation of a Cornish Assembly, along the lines of the National Assembly for Wales, the Northern Ireland Assembly and the Scottish Parliament. Between 5 March 2000 and December 2001, the campaign collected the signatures of 41,650 Cornish residents endorsing the call for a devolved assembly, along with 8,896 signatories from outside Cornwall. The resulting petition was presented to the Prime Minister, Tony Blair. Emergency services Devon and Cornwall Police, Cornwall Fire and Rescue Service, South Western Ambulance Service, Cornwall Air Ambulance, HM Coastguard, Cornwall Search & Rescue Team and British Transport Police. Economy Cornwall is one of the poorest parts of the United Kingdom in terms of per capita GDP and average household incomes. At the same time, parts of the county, especially on the coast, have high house prices, driven up by demand from relatively wealthy retired people and second-home owners. The GVA per head was 65% of the UK average for 2004. The GDP per head for Cornwall and the Isles of Scilly was 79.2% of the EU-27 average for 2004; the UK per head average was 123.0%. In 2011, the latest year for which figures are available, Cornwall's measure of wealth (including the Isles of Scilly) was 64% of the European average per capita. Historically, mining of tin (and later also of copper) was important in the Cornish economy. The first reference to this appears to be by Pytheas: see above. Julius Caesar was the last classical writer to mention the tin trade, which appears to have declined during the Roman occupation. The tin trade revived in the Middle Ages, and its importance to the Kings of England resulted in certain privileges being granted to the tinners; the Cornish rebellion of 1497 is attributed to grievances of the tin miners. In the mid-19th century, however, the tin trade again fell into decline. Other primary sector industries that have declined since the 1960s include china clay production, fishing and farming. Today, the Cornish economy depends heavily on its tourist industry, which makes up around a quarter of the economy. The official measures of deprivation and poverty at district and 'sub-ward' level show that there is great variation in poverty and prosperity in Cornwall, with some areas among the poorest in England and others among the top half in prosperity. For example, the ranking of 32,482 sub-wards in England in the index of multiple deprivation (2006) ranged from 819th (part of Penzance East) to 30,899th (part of Saltash Burraton in Caradon), where the lower number represents the greater deprivation. Cornwall is one of two UK areas designated as 'less developed regions' which qualify for Cohesion Policy grants from the European Union. Egbert incorporated Cornwall ecclesiastically with the West Saxon diocese of Sherborne, and endowed Eahlstan, his fighting bishop, who took part in the campaign, with an extensive Cornish estate consisting of Callington and Lawhitton, both in the Tamar valley, and Pawton near Padstow. In 838, the Cornish and their Danish allies were defeated by Egbert in the Battle of Hingston Down at Hengestesdune (probably Hingston Down in Cornwall). In 875, the last recorded king of Cornwall, Dumgarth, is said to have drowned.
Around the 880s, Anglo-Saxons from Wessex had established modest land holdings in the eastern part of Cornwall; notably Alfred the Great who had acquired a few estates. William of Malmesbury, writing around 1120, says that King Athelstan of England (924–939) fixed the boundary between English and Cornish people at the east bank of the River Tamar. Breton–Norman period One interpretation of the Domesday Book is that by this time the native Cornish landowning class had been almost completely dispossessed and replaced by English landowners, particularly Harold Godwinson himself. However, the Bodmin manumissions show that two leading Cornish figures nominally had Saxon names, but these were both glossed with native Cornish names. In 1068 Brian of Brittany may have been created Earl of Cornwall, and naming evidence cited by medievalist Edith Ditmas suggests that many other post-Conquest landowners in Cornwall were Breton allies of the Normans, the Bretons being descended from Britons who had fled to what is today Brittany during the early years of the Anglo-Saxon conquest. She also proposed this period for the early composition of the Tristan and Iseult cycle by poets such as Béroul from a pre-existing shared Brittonic oral tradition. Soon after the Norman conquest most of the land was transferred to the new Breton–Norman aristocracy, with the lion's share going to Robert, Count of Mortain, half-brother of King William and the largest landholder in England after the king with his stronghold at Trematon Castle near the mouth of the Tamar. Later medieval administration and society Subsequently, however, Norman absentee landlords became replaced by a new Cornish-Norman ruling class including scholars such as Richard Rufus of Cornwall. These families eventually became the new rulers of Cornwall, typically speaking Norman French, Breton-Cornish, Latin, and eventually English, with many becoming involved in the operation of the Stannary Parliament system, the Earldom and eventually the Duchy of Cornwall. The Cornish language continued to be spoken and acquired a number of characteristics establishing its identity as a separate language from Breton. Stannary parliaments The stannary parliaments and stannary courts were legislative and legal institutions in Cornwall and in Devon (in the Dartmoor area). The stannary courts administered equity for the region's tin-miners and tin mining interests, and they were also courts of record for the towns dependent on the mines. The separate and powerful government institutions available to the tin miners reflected the enormous importance of the tin industry to the English economy during the Middle Ages. Special laws for tin miners pre-date written legal codes in Britain, and ancient traditions exempted everyone connected with tin mining in Cornwall and Devon from any jurisdiction other than the stannary courts in all but the most exceptional circumstances. Piracy and smuggling Cornish piracy was active during the Elizabethan era on the west coast of Britain. Cornwall is well known for its wreckers who preyed on ships passing Cornwall's rocky coastline. During the 17th and 18th centuries Cornwall was a major smuggling area. Heraldry In later times, Cornwall was known to the Anglo-Saxons as "West Wales" to distinguish it from "North Wales" (the modern nation of Wales). The name appears in the Anglo-Saxon Chronicle in 891 as On Corn walum. In the Domesday Book it was referred to as Cornualia and in c. 1198 as Cornwal. 
Other names for the county include a latinisation of the name as Cornubia (first appears in a mid-9th-century deed purporting to be a copy of one dating from c. 705), and as Cornugallia in 1086. Physical geography Cornwall forms the tip of the south-west peninsula of the island of Great Britain, and is therefore exposed to the full force of the prevailing winds that blow in from the Atlantic Ocean. The coastline is composed mainly of resistant rocks that give rise in many places to tall cliffs. Cornwall has a border with only one other county, Devon, which is formed almost entirely by the River Tamar, and the remainder (to the north) by the Marsland Valley. Coastal areas The north and south coasts have different characteristics. The north coast on the Celtic Sea, part of the Atlantic Ocean, is more exposed and therefore has a wilder nature. The prosaically named High Cliff, between Boscastle and St Gennys, is the highest sheer-drop cliff in Cornwall at . However, there are also many extensive stretches of fine golden sand which form the beaches important to the tourist industry, such as those at Bude, Polzeath, Watergate Bay, Perranporth, Porthtowan, Fistral Beach, Newquay, St Agnes, St Ives, and on the south coast Gyllyngvase beach in Falmouth and the large beach at Praa Sands further to the south-west. There are two river estuaries on the north coast: Hayle Estuary and the estuary of the River Camel, which provides Padstow and Rock with a safe harbour. The seaside town of Newlyn is a popular holiday destination, as it is one of the last remaining traditional Cornish fishing ports, with views reaching over Mount's Bay. The south coast, dubbed the "Cornish Riviera", is more sheltered and there are several broad estuaries offering safe anchorages, such as at Falmouth and Fowey. Beaches on the south coast usually consist of coarser sand and shingle, interspersed with rocky sections of wave-cut platform. Also on the south coast, the picturesque fishing village of Polperro, at the mouth of the Pol River, and the fishing port of Looe on the River Looe are both popular with tourists. Inland areas The interior of the county consists of a roughly east–west spine of infertile and exposed upland, with a series of granite intrusions, such as Bodmin Moor, which contains the highest land within Cornwall. From east to west, and with approximately descending altitude, these are Bodmin Moor, Hensbarrow north of St Austell, Carnmenellis to the south of Camborne, and the Penwith or Land's End peninsula. These intrusions are the central part of the granite outcrops that form the exposed parts of the Cornubian batholith of south-west Britain, which also includes Dartmoor to the east in Devon and the Isles of Scilly to the west, the latter now being partially submerged. The intrusion of the granite into the surrounding sedimentary rocks gave rise to extensive metamorphism and mineralisation, and this led to Cornwall being one of the most important mining areas in Europe until the early 20th century. It is thought tin was mined here as early as the Bronze Age, and copper, lead, zinc and silver have all been mined in Cornwall. Alteration of the granite also gave rise to extensive deposits of China Clay, especially in the area to the north of St Austell, and the extraction of this remains an important industry. The uplands are surrounded by more fertile, mainly pastoral farmland. Near the south coast, deep wooded valleys provide sheltered conditions for flora that like shade and a moist, mild climate. 
These areas lie mainly on Devonian sandstone and slate. The north east of Cornwall lies on Carboniferous rocks known as the Culm Measures. In places these have been subjected to severe folding, as can be seen on the north coast near Crackington Haven and in several other locations. Lizard Peninsula The geology of the Lizard peninsula is unusual, in that it is mainland Britain's only example of an ophiolite, a section of oceanic crust now found on land. Much of the peninsula consists of the dark green and red Precambrian serpentinite, which forms spectacular cliffs, notably at Kynance Cove, and carved and polished serpentine ornaments are sold in local gift shops. This ultramafic rock also forms a very infertile soil which covers the flat and marshy heaths of the interior of the peninsula. This is home to rare plants, such as the Cornish Heath, which has been adopted as the county flower. Hills and high points Settlements and transport Cornwall's only city, and the home of the council headquarters, is Truro. Nearby Falmouth is notable as a port. St Just in Penwith is the westernmost town in England, though the same claim has been made for Penzance, which is larger. St Ives and Padstow are today small vessel ports with a major tourism and leisure sector in their economies. Newquay on the north coast is another major urban settlement which is known for its beaches and is a popular surfing destination, as is Bude further north, but Newquay is now also becoming important for its aviation-related industries. Camborne is the county's largest town and more populous than the capital Truro. Together with the neighbouring town of Redruth, it forms the largest urban area in Cornwall, and both towns were significant as centres of the global tin mining industry in the 19th century; nearby copper mines were also very productive during that period. St Austell is also larger than Truro and was the centre of the china clay industry in Cornwall. Until four new parishes were created for the St Austell area on 1 April 2009 St Austell was the largest settlement in Cornwall. Cornwall borders the county of Devon at the River Tamar. Major roads between Cornwall and the rest of Great Britain are the A38 which crosses the Tamar at Plymouth via the Tamar Bridge and the town of Saltash, the A39 road (Atlantic Highway) from Barnstaple, passing through North Cornwall to end in Falmouth, and the A30 which connects Cornwall to the M5 motorway at Exeter, crosses the border south of Launceston, crosses Bodmin Moor and connects Bodmin, Truro, Redruth, Camborne, Hayle and Penzance. Torpoint Ferry links Plymouth with Torpoint on the opposite side of the Hamoaze. A rail bridge, the Royal Albert Bridge built by Isambard Kingdom Brunel (1859), provides the other main land transport link. The city of Plymouth, a large urban centre in south west Devon, is an important location for services such as hospitals, department stores, road and rail transport, and cultural venues, particularly for people living in east Cornwall. Cardiff and Swansea, across the Bristol Channel, have at some times in the past been connected to Cornwall by ferry, but these do not operate now. The Isles of Scilly are served by ferry (from Penzance) and by aeroplane, having its own airport: St Mary's Airport. There are regular flights between St Mary's and Land's End Airport, near St Just, and Newquay Airport; during the summer season, a service is also provided between St Mary's and Exeter Airport, in Devon. 
Ecology Flora and fauna Cornwall has varied habitats including terrestrial and marine ecosystems. One noted species in decline locally is the reindeer lichen, which has been made a priority for protection under the national UK Biodiversity Action Plan. Botanists divide Cornwall and Scilly into two vice-counties: West (1) and East (2). The standard flora is F. H. Davey's Flora of Cornwall (1909). Davey was assisted by A. O. Hume, and he thanks Hume, his companion on excursions in Cornwall and Devon, for help in the compilation of that Flora, publication of which was financed by him. Climate Cornwall has a temperate oceanic climate (Köppen climate classification: Cfb), with mild winters and cool summers. Cornwall has the mildest and one of the sunniest climates of the United Kingdom, as a result of its oceanic setting and the influence of the Gulf Stream. The average annual temperature in Cornwall ranges from its mildest on the Isles of Scilly to its coolest in the central uplands. Winters are among the warmest in the country due to the moderating effects of the warm ocean currents, and frost and snow are very rare at the coast and are also rare in the central upland areas. Summers are, however, not as warm as in other parts of southern England. The surrounding sea and its southwesterly position mean that Cornwall's weather can be relatively changeable. Cornwall is one of the sunniest areas in the UK. It has more than 1,541 hours of sunshine per year, with the highest average of 7.6 hours of sunshine per day in July. The moist, mild air coming from the southwest brings higher amounts of rainfall than in eastern Great Britain. However, this is not as much as in more northern areas of the west coast. The Isles of Scilly, for example, where there are on average fewer than two days of air frost per year, are the only area in the UK to be in hardiness zone 10. The islands have, on average, less than one day of air temperature exceeding 30 °C per year and are in the AHS Heat Zone 1. Extreme temperatures in Cornwall are particularly rare; however, extreme weather in the form of storms and floods is common. Culture Language Cornish language Cornish, a member of the Brythonic branch of the Celtic language family, is a revived language that died out as a first language in the late 18th century. It is closely related to the other Brythonic languages, Breton and Welsh, and less so to the Goidelic languages. Cornish has no legal status in the UK. There has been a revival of the language by academics and optimistic enthusiasts since the mid-19th century that gained momentum from the publication in 1904 of Henry Jenner's Handbook of the Cornish Language. It is a social networking community language rather than a social community group language. Cornwall Council encourages and facilitates language classes within the county, in schools and within the wider community. In 2002, Cornish was named as a UK regional language in the European Charter for Regional or Minority Languages. As a result, in 2005 its promoters received limited government funding. Several words originating in Cornish are used in the mining terminology of English, such as costean, gossan, gunnies, kibbal, kieve and vug. English dialect The Cornish language and culture influenced the emergence of particular pronunciations and grammar not used elsewhere in England. The Cornish dialect is spoken to varying degrees; however, someone speaking in broad Cornish may be practically unintelligible to one not accustomed to it.
Cornish dialect has generally declined, as in most places it is now little more than a regional accent and grammatical differences have been eroded over time. Marked differences in vocabulary and usage still exist between the eastern and western parts of Cornwall. Flag Saint Piran's Flag is the national flag and ancient banner of Cornwall, and an emblem of the Cornish people. It is regarded as the county flag by Cornwall Council. The banner of Saint Piran is a white cross on a black background (in terms of heraldry 'sable, a cross argent'). According to legend Saint Piran adopted these colours from seeing the white tin in the black coals and ashes during his discovery of tin. The Cornish flag is an exact reverse of the former Breton black cross national flag and is known by the same name "Kroaz Du". Arts Since the 19th century, Cornwall, with its unspoilt maritime scenery and strong light, has sustained a vibrant visual art scene of international renown. Artistic activity within Cornwall was initially centred on the art-colony of Newlyn, most active at the turn of the 20th century. This Newlyn School is associated with the names of Stanhope Forbes, Elizabeth Forbes, Norman Garstin and Lamorna Birch. Modernist writers such as D. H. Lawrence and Virginia Woolf lived in Cornwall between the wars, and Ben Nicholson, the painter, having visited in the 1920s came to live in St Ives with his then wife, the sculptor Barbara Hepworth, at the outbreak of the Second World War. They were later joined by the Russian emigrant Naum Gabo, and other artists. These included Peter Lanyon, Terry Frost, Patrick Heron, Bryan Wynter and Roger Hilton. St Ives also houses the Leach Pottery, where Bernard Leach, and his followers championed Japanese inspired studio pottery. Much of this modernist work can be seen in Tate St Ives. The Newlyn Society and Penwith Society of Arts continue to be active, and contemporary visual art is documented in a dedicated online journal. Music Cornwall has a folk music tradition that has survived into the present and is well known for its unusual folk survivals such as Mummers Plays, the Furry Dance in Helston played by the famous Helston Town Band, and Obby Oss in Padstow. Newlyn is home to a food and music festival that hosts live music, cooking demonstrations, and displays of locally caught fish. As in other former mining districts of Britain, male voice choirs and brass bands, such as Brass on the Grass concerts during the summer at Constantine, are still very popular in Cornwall. Cornwall also has around 40 brass bands, including the six-times National Champions of Great Britain, Camborne Youth Band, and the bands of Lanner and St Dennis. Cornish players are regular participants in inter-Celtic festivals, and Cornwall itself has several inter-Celtic festivals such as Perranporth's Lowender Peran folk festival. Contemporary musician Richard D. James (also known as Aphex Twin) grew up in Cornwall, as did Luke Vibert and Alex Parks, winner of Fame Academy 2003. Roger Taylor, the drummer from the band Queen was also raised in the county, and currently lives not far from Falmouth. The American singer-songwriter Tori Amos now resides predominantly in North Cornwall not far from Bude with her family. The lutenist, lutarist, composer and festival director Ben Salfield lives in Truro. Mick Fleetwood of Fleetwood Mac was born in Redruth. Literature Cornwall's rich heritage and dramatic landscape have inspired numerous writers. 
Fiction Sir Arthur Quiller-Couch, author of many novels and works of literary criticism, lived in Fowey: his novels are mainly set in Cornwall. Daphne du Maurier lived at Menabilly near Fowey and many of her novels had Cornish settings, including Rebecca, Jamaica Inn, Frenchman's Creek, My Cousin Rachel, and The House on the Strand. She is also noted for writing Vanishing Cornwall. Cornwall provided the inspiration for The Birds, one of her terrifying series of short stories, made famous as a film by Alfred Hitchcock. Conan Doyle's The Adventure of the Devil's Foot featuring Sherlock Holmes is set in Cornwall. Winston Graham's series Poldark, Kate Tremayne's Adam Loveday series, Susan Cooper's novels Over Sea, Under Stone and Greenwitch, and Mary Wesley's The Camomile Lawn are all set in Cornwall. Writing under the pseudonym of Alexander Kent, Douglas Reeman sets parts of his Richard Bolitho and Adam Bolitho series in the Cornwall of the late 18th and the early 19th centuries, particularly in Falmouth. Gilbert K. Chesterton placed the action of many of his stories there. Medieval Cornwall is the setting of the trilogy by Monica Furlong, Wise Child, Juniper and Colman, as well as part of Charles Kingsley's Hereward the Wake. Hammond Innes's novel, The Killer Mine; Charles de Lint's novel The Little Country; and Chapters 24-25 of J. K. Rowling's Harry Potter and the Deathly Hallows take place in Cornwall (Shell Cottage, on the beach outside the fictional village of Tinworth). David Cornwell, who wrote espionage novels under the name John le Carré, lived and worked in Cornwall. Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there. Poetry The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick. Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; The Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986. The poet Laurence Binyon wrote "For the Fallen" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription "FOR THE FALLEN / Composed on these cliffs, 1914". The plaque also bears below this the fourth stanza (sometimes referred to as "The Ode") of the poem: They shall grow not old, as we that are left grow old Age shall not weary them, nor the years condemn At the going down of the sun and in the morning We will remember them Other literary works Cornwall produced a substantial number of passion plays such as the Ordinalia during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language. See also Cornish literature Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. 
Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian. Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story, and the second act of Richard Wagner's opera Tristan und Isolde is set in Cornwall.
differ from absolute monarchies (in which a monarch whether limited by a constitution or not is the only one to decide) in that they are bound to exercise powers and authorities within limits prescribed by an established legal framework. Constitutional monarchies range from countries such as Liechtenstein, Monaco, Morocco, Jordan, Kuwait, and Bahrain, where the constitution grants substantial discretionary powers to the sovereign, to countries such as Australia, the United Kingdom, Canada, the Netherlands, Spain, Belgium, Sweden, Malaysia, and Japan, where the monarch retains significantly less personal discretion in the exercise of their authority. Constitutional monarchy may refer to a system in which the monarch acts as a non-party political head of state under the constitution, whether written or unwritten. While most monarchs may hold formal authority and the government may legally operate in the monarch's name, in the form typical in Europe the monarch no longer personally sets public policy or chooses political leaders. Political scientist Vernon Bogdanor, paraphrasing Thomas Macaulay, has defined a constitutional monarch as "A sovereign who reigns but does not rule". In addition to acting as a visible symbol of national unity, a constitutional monarch may hold formal powers such as dissolving parliament or giving royal assent to legislation. However, such powers generally may only be exercised strictly in accordance with either written constitutional principles or unwritten constitutional conventions, rather than any personal political preferences of the sovereign. In The English Constitution, British political theorist Walter Bagehot identified three main political rights which a constitutional monarch may freely exercise: the right to be consulted, the right to encourage, and the right to warn. Many constitutional monarchies still retain significant authorities or political influence, however, such as through certain reserve powers and who may also play an important political role. The United Kingdom and the other Commonwealth realms are all constitutional monarchies in the Westminster system of constitutional governance. Two constitutional monarchies – Malaysia and Cambodia – are elective monarchies, wherein the ruler is periodically selected by a small electoral college. Strongly limited constitutional monarchies, such as the United Kingdom and Australia, have been referred to as crowned republics by writers H. G. Wells and Glenn Patmore. The concept of semi-constitutional monarch identifies constitutional monarchies where the monarch retains substantial powers, on a par with a president in the semi-presidential system. As a result, constitutional monarchies where the monarch has a largely ceremonial role may also be referred to as 'parliamentary monarchies' to differentiate them from semi-constitutional monarchies. History The oldest constitutional monarchy dating back to ancient times was that of the Hittites. They were an ancient Anatolian people that lived during the Bronze Age whose king or queen had to share their authority with an assembly, called the Panku, which was the equivalent to a modern-day deliberative assembly or a legislature. Members of the Panku came from scattered noble families who worked as representatives of their subjects in an adjutant or subaltern federal-type landscape. 
Constitutional and absolute monarchy England, Scotland and the United Kingdom In the Kingdom of England, the Glorious Revolution of 1688 furthered the constitutional monarchy, restricted by laws such as the Bill of Rights 1689 and the Act of Settlement 1701, although the first form of constitution was enacted with the Magna Carta of 1215. At the same time, in Scotland, the Convention of Estates enacted the Claim of Right Act 1689, which placed similar limits on the Scottish monarchy. Although Queen Anne was the last monarch to veto an Act of Parliament when, on 11 March 1708, she blocked the Scottish Militia Bill, Hanoverian monarchs continued to selectively dictate government policies. For instance King George III constantly blocked Catholic Emancipation, eventually precipitating the resignation of William Pitt the Younger as prime minister in 1801. The sovereign's influence on the choice of prime minister gradually declined over this period, King William IV being the last monarch to dismiss a prime minister, when in 1834 he removed Lord Melbourne as a result of Melbourne's choice of Lord John Russell as Leader of the House of Commons. Queen Victoria was the last monarch to exercise real personal power, but this diminished over the course of her reign. In 1839, she became the last sovereign to keep a prime minister in power against the will of Parliament when the Bedchamber crisis resulted in the retention of Lord Melbourne's administration. By the end of her reign, however, she could do nothing to block the unacceptable (to her) premierships of William Gladstone, although she still exercised power in appointments to the Cabinet, for example in 1886 preventing Gladstone's choice of Hugh Childers as War Secretary in favour of Sir Henry Campbell-Bannerman. Today, the role of the British monarch is by convention effectively ceremonial. Instead, the British Parliament and the Government – chiefly in the office of Prime Minister of the United Kingdom – exercise their powers under "Royal (or Crown) Prerogative": on behalf of the monarch and through powers still formally possessed by the Monarch. No person may accept significant public office without swearing an oath of allegiance to the Queen. With few exceptions, the monarch is bound by constitutional convention to act on the advice of the Government. Continental Europe Poland developed the first constitution for a monarchy in continental Europe, with the Constitution of 3 May 1791; it was the second single-document constitution in the world just after the first republican Constitution of the United States. Constitutional monarchy also occurred briefly in the early years of the French Revolution, but much more widely afterwards. Napoleon Bonaparte is considered the first monarch proclaiming himself as an embodiment of the nation, rather than as a divinely appointed ruler; this interpretation of monarchy is germane to continental constitutional monarchies. German philosopher Georg Wilhelm Friedrich Hegel, in his work Elements of the Philosophy of Right (1820), gave the concept a philosophical justification that concurred with evolving contemporary political theory and the Protestant Christian view of natural law. Hegel's forecast of a constitutional monarch with very limited powers whose function is to embody the national character and provide constitutional continuity in times of emergency was reflected in the development of constitutional monarchies in Europe and Japan. 
Executive monarchy versus ceremonial monarchy There exist at least two different types of constitutional monarchies in the modern world — executive and ceremonial. In executive monarchies, the monarch wields significant (though not absolute) power. The monarchy under this system of government is a powerful political (and social) institution. By contrast, in ceremonial monarchies, the monarch holds little or no actual power or direct political influence, though they frequently have a great deal of social and cultural influence. Executive constitutional monarchies: Bhutan, Bahrain, Jordan, Kuwait, Liechtenstein, Monaco, Morocco, Qatar (de jure), and Tonga. Ceremonial constitutional monarchies (informally referred to as crowned republics): Andorra, Antigua and Barbuda, Australia, The Bahamas, Belgium, Belize, Cambodia, Canada, Denmark, Grenada, Jamaica, Japan, Lesotho, Luxembourg, Malaysia, the Netherlands, New Zealand, Norway, Papua New Guinea, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Solomon Islands, Spain, Sweden, Thailand, Tuvalu and the United Kingdom. Ceremonial and executive monarchy, should not be confused with democratic and non-democratic monarchical systems. For example, in Liechtenstein and Monaco, the ruling monarchs wield significant executive power. However, they are not absolute monarchs, and these countries are generally reckoned as democracies. Modern constitutional monarchy As originally conceived, a constitutional monarch was head of the executive branch and quite a powerful figure even though his or her power was limited by the constitution and the elected parliament. Some of the framers of the U.S. Constitution may have envisioned the president as an elected constitutional monarch, as the term was then understood, following Montesquieu's account of the separation of powers. The present-day concept of a constitutional monarchy developed in the United Kingdom, where the democratically elected parliaments, and their leader, the prime minister, exercise power, with the monarchs having ceded power and remaining as a titular position. In many cases the monarchs, while still at the very top of the political and social hierarchy, were given the status of "servants of the people" to reflect the new, egalitarian position. In the course of France's July Monarchy, Louis-Philippe I was styled "King of the French" rather than "King of France". Following the Unification of Germany, Otto von Bismarck rejected the British model. In the constitutional monarchy established under the Constitution of the German Empire which Bismarck inspired, the Kaiser retained considerable actual executive power, while the Imperial Chancellor needed no parliamentary vote of confidence and ruled solely by the imperial mandate. However, this model of constitutional monarchy was discredited and abolished following Germany's defeat in the First World War. Later, Fascist Italy could also be considered a constitutional monarchy, in that there was a king as the titular head of state while actual power was held by Benito Mussolini under a constitution. This eventually discredited the Italian monarchy and led to its abolition in 1946. After the Second World War, surviving European monarchies almost invariably adopted some variant of the constitutional monarchy model originally developed in Britain. Nowadays a parliamentary democracy that is a constitutional monarchy is considered to differ from one that is a republic only in detail rather than in substance. 
In both cases, the titular head of state—monarch or president—serves the traditional role of embodying and representing the nation, while the government is carried on by a cabinet composed predominantly of elected Members of Parliament. However, three important factors distinguish monarchies such as the United Kingdom from systems where greater power might otherwise rest with Parliament. These are: the Royal Prerogative under which the monarch may exercise power under certain very limited circumstances; Sovereign Immunity under which the monarch may do no wrong under the law because the responsible government is instead deemed accountable; and the monarch may not be subject to the same taxation or property use restrictions as most citizens. Other privileges may be nominal or ceremonial (e.g., where the executive, judiciary, police or armed forces act on the authority of or owe allegiance to the Crown). Today slightly more than a quarter of constitutional monarchies are Western European countries, including the United Kingdom, Spain, the Netherlands, Belgium, Norway, Denmark, Luxembourg, Monaco, Liechtenstein and Sweden. However, the two most populous constitutional monarchies in the world are in Asia: Japan and Thailand.
In these countries, the prime minister holds the day-to-day powers of governance, while the monarch retains residual (but not always insignificant) powers. The powers of the monarch differ between countries. In Denmark and in Belgium, for example, the Monarch formally appoints a representative to preside over the creation of a coalition government following a parliamentary election, while in Norway the King chairs special meetings of the cabinet. In nearly all cases, the monarch is still the nominal chief executive but is bound by convention to act on the advice of the Cabinet. Only a few monarchies (most notably Japan and Sweden) have amended their constitutions so that the monarch is no longer even the nominal chief executive. There are fifteen constitutional monarchies under Queen Elizabeth II, which are known as Commonwealth realms. Unlike some of their continental European counterparts, the Monarch and her Governors-General in the Commonwealth realms hold significant "reserve" or "prerogative" powers, to be wielded in times of extreme emergency or constitutional crises, usually to uphold parliamentary government. An instance of a Governor-General exercising such power occurred during the 1975 Australian constitutional crisis, when the Australian Prime Minister, Gough Whitlam, was dismissed by the Governor-General. The Australian Senate had threatened to block the Government's budget by refusing to pass the necessary appropriation bills. On 11 November 1975, Whitlam intended to call a half-Senate election in an attempt to break the deadlock. When he sought the Governor-General's approval of the election, the Governor-General instead dismissed him as Prime Minister. Shortly after that installed leader of the opposition Malcolm Fraser in his place. Acting quickly before all parliamentarians became aware of the government change, Fraser and his allies secured passage of the appropriation bills, and the Governor-General dissolved Parliament for a double dissolution election. Fraser and his government were returned with a massive majority. This led to much speculation among Whitlam's supporters as to whether this use of the Governor-General's reserve powers was appropriate, and whether Australia should become a republic. Among supporters of constitutional monarchy, however, the experience confirmed the monarchy's value as a source of checks and balances against elected politicians who might seek powers in excess of those conferred by the constitution, and ultimately as a safeguard against dictatorship. In Thailand's constitutional monarchy, the monarch is recognized as the Head of State, Head of the Armed Forces, Upholder of the Buddhist Religion, and Defender of the Faith. The immediate former King, Bhumibol Adulyadej, was the longest-reigning monarch in the world and in all of Thailand's history, before passing away on 13 October 2016. Bhumibol reigned through several political changes in the Thai government. He played an influential role in each incident, often acting as mediator between disputing political opponents. (See Bhumibol's role in Thai Politics.) Among the powers retained by the Thai monarch under the constitution, lèse majesté protects the image of the monarch and enables him to play a role in politics. It carries strict criminal penalties for violators. Generally, the Thai people were reverent of Bhumibol. Much of his social influence arose from this reverence and from the socioeconomic improvement efforts undertaken by the royal family. 
In the United Kingdom, a frequent debate centres on when it is appropriate for a British monarch to act. When a monarch does act, political controversy can often ensue, partially because the neutrality of the crown is seen to be compromised in favour of a partisan goal, while some political scientists champion the idea of an "interventionist monarch" as a check against possible illegal action by politicians. For instance, the monarch of the United Kingdom can theoretically exercise an absolute veto over legislation by withholding royal assent. However, no monarch has done so since 1708, and it is widely believed that this and many of the monarch's other political powers are lapsed powers. There are currently 43 monarchies worldwide. List of current constitutional monarchies Former constitutional monarchies The Anglo-Corsican Kingdom was a brief period in the history of Corsica (1794–1796) when the island broke with Revolutionary France and sought military protection from Great Britain. Corsica became an independent kingdom under George III of the United Kingdom, but with its own elected parliament and a written constitution guaranteeing local autonomy and democratic rights. Barbados, from gaining its independence in 1966 until 2021, was a constitutional monarchy in the Commonwealth of Nations with a Governor-General representing the Monarchy of Barbados. After an extensive history of republican movements, a republic was declared on 30 November 2021. Brazil, from 1822, with the proclamation of independence and rise of the Empire of Brazil by Pedro I of Brazil, to 1889, when Pedro II was deposed by a military coup. Kingdom of Bulgaria until 1946, when Tsar Simeon was deposed by the communist assembly. Many republics in the Commonwealth of Nations were constitutional monarchies for some period after their independence, including South Africa (1910–1961), Ceylon from 1948 to 1972 (now Sri Lanka), Fiji (1970–1987), Gambia (1965–1970), Ghana (1957–1960), Guyana (1966–1970), Trinidad and Tobago (1962–1976), and Barbados (1966–2021). The Grand Principality of Finland was a constitutional monarchy though its ruler, Alexander I, was simultaneously an autocrat and absolute ruler in Russia. France, several times from 1789 through the 19th century. The transformation of the Estates General of 1789 into the National Assembly initiated an ad-hoc transition from the absolute monarchy of the Ancien Régime to a new constitutional system. France formally became an executive constitutional monarchy with the promulgation of the French Constitution of 1791, which took effect on 1 October of that year. This first French constitutional monarchy was short-lived, ending with the overthrow of the monarchy and establishment of the French First Republic after the Insurrection of 10 August 1792. Several years later, in 1804, Napoleon Bonaparte proclaimed himself Emperor of the French in what was ostensibly a constitutional monarchy.
The third law also echoes a statement in a 1942 story by Leigh Brackett: "Witchcraft to the ignorant, … simple science to the learned". Even earlier examples of this sentiment may be found in Wild Talents (1932) by Charles Fort: "…a performance that may someday be considered understandable, but that, in these primitive times, so transcends what is said to be the known that it is what I mean by magic," and in the short story The Hound of Death (1933) by Agatha Christie: "The supernatural is only the nature of which the laws are not yet understood." Virginia Woolf's 1928 novel Orlando: A Biography explicitly compares advanced technology to magic. Clarke gave an example of the third law when he said that while he "would have believed anyone who told him back in 1962 that there would one day exist a book-sized object capable of holding the content of an entire library, he would never have accepted that the same device could find a page or word in a second and then convert it into any typeface and size from Albertus Extra Bold to Zurich Calligraphic", referring to his memory of "seeing and hearing Linotype machines which slowly converted 'molten lead into front pages that required two men to lift them'".
Variants of the third law The third law has inspired many snowclones and other variations:
Any sufficiently advanced extraterrestrial intelligence is indistinguishable from God. (Shermer's last law)
Any sufficiently advanced act of benevolence is indistinguishable from malevolence (referring to artificial intelligence)
The following two variants are very similar, and combine the third law with Hanlon's razor:
Any sufficiently advanced cluelessness is indistinguishable from malice (Clark's law)
Any sufficiently advanced incompetence is indistinguishable from malice (Grey's law)
Any sufficiently advanced troll is indistinguishable from a genuine kook, or the viewpoints of even the most extreme crank are indistinguishable from sufficiently advanced satire (Poe's law)
Any sufficiently advanced technology is indistinguishable from a rigged demo
Any sufficiently advanced idea is distinguishable from mere magical incantation provided the former is presented as a mathematical proof, verifiable by sufficiently competent mathematicians
Any sufficiently crappy research is indistinguishable from fraud (Andrew Gelman)
Any sufficiently advanced hobby is indistinguishable from work
A contrapositive of the third law is: Any technology distinguishable from magic is insufficiently advanced. (Gehm's corollary)
The third law has been reversed for fictional universes involving magic: "Any sufficiently analyzed magic is indistinguishable from science!"
Corollaries Isaac Asimov's corollary to Clarke's First Law holds that when the lay public rallies round an idea that is denounced by distinguished but elderly scientists, the distinguished but elderly scientists are, after all, probably right.
mystic ... They did not see Friedrich's faithful and conscientious study of nature in everything he represented". During this period Friedrich frequently sketched memorial monuments and sculptures for mausoleums, reflecting his obsession with death and the afterlife; he even created designs for some of the funerary art in Dresden's cemeteries. Some of these works were lost in the fire that destroyed Munich's Glass Palace (1931) and later in the 1945 bombing of Dresden. Later life and death Friedrich's reputation steadily declined over the final fifteen years of his life. As the ideals of early Romanticism passed from fashion, he came to be viewed as an eccentric and melancholy character, out of touch with the times. Gradually his patrons fell away. By 1820, he was living as a recluse and was described by friends as the "most solitary of the solitary". Towards the end of his life he lived in relative poverty. He became isolated and spent long periods of the day and night walking alone through woods and fields, often beginning his strolls before sunrise. In June 1835, Friedrich suffered his first stroke, which left him with minor limb paralysis and greatly reduced his ability to paint. As a result, he was unable to work in oil; instead he was limited to watercolour, sepia and reworking older compositions. Although his vision remained strong, he had lost the full strength of his hand. Yet he was able to produce a final 'black painting', Seashore by Moonlight (1835–36), described by Vaughan as the "darkest of all his shorelines, in which richness of tonality compensates for the lack of his former finesse". Symbols of death appeared in his other work from this period. Soon after his stroke, the Russian royal family purchased a number of his earlier works, and the proceeds allowed him to travel to Teplitz—in today's Czech Republic—to recover.
During the mid-1830s, Friedrich began a series of portraits and he returned to observing himself in nature. As the art historian William Vaughan has observed, however, "He can see himself as a man greatly changed. He is no longer the upright, supportive figure that appeared in Two Men Contemplating the Moon in 1819. He is old and stiff ... he moves with a stoop". By 1838, he was capable only of working in a small format. He and his family were living in poverty and grew increasingly dependent for support on the charity of friends. Friedrich died in Dresden on 7 May 1840, and was buried in Dresden's Trinitatis-Friedhof (Trinity Cemetery) east of the city centre (the entrance to which he had painted some 15 years earlier). The simple flat gravestone lies north-west of the central roundel within the main avenue. By the time of his death, his reputation and fame were waning, and his passing was little noticed within the artistic community. His artwork had certainly been acknowledged during his lifetime, but not widely. While the close study of landscape and an emphasis on the spiritual elements of nature were commonplace in contemporary art, his work was too original and personal to be well understood. By 1838, his work no longer sold or received attention from critics; the Romantic movement had been moving away from the early idealism that the artist had helped found. After his death, Carl Gustav Carus wrote a series of articles which paid tribute to Friedrich's transformation of the conventions of landscape painting. However, Carus' articles placed Friedrich firmly in his time, and did not place the artist within a continuing tradition. Only one of his paintings had been reproduced as a print, and that was produced in very few copies. Themes Landscape and the sublime The visualisation and portrayal of landscape in an entirely new manner was Friedrich's key innovation. He sought not just to explore the blissful enjoyment of a beautiful view, as in the classic conception, but rather to examine an instant of sublimity, a reunion with the spiritual self through the contemplation of nature. Friedrich was instrumental in transforming landscape in art from a backdrop subordinated to human drama to a self-contained emotive subject. Friedrich's paintings commonly employed the Rückenfigur—a person seen from behind, contemplating the view. The viewer is encouraged to place himself in the position of the Rückenfigur, by which means he experiences the sublime potential of nature, understanding that the scene is as perceived and idealised by a human. Friedrich created the notion of a landscape full of romantic feeling—die romantische Stimmungslandschaft. His art details a wide range of geographical features, such as rock coasts, forests, and mountain scenes. He often used the landscape to express religious themes. During his time, most of the best-known paintings were viewed as expressions of a religious mysticism. Friedrich said, "The artist should paint not only what he sees before him, but also what he sees within him. If, however, he sees nothing within him, then he should also refrain from painting that which he sees before him. Otherwise, his pictures will be like those folding screens behind which one expects to find only the sick or the dead." Expansive skies, storms, mist, forests, ruins and crosses bearing witness to the presence of God are frequent elements in Friedrich's landscapes. 
Though death finds symbolic expression in boats that move away from shore—a Charon-like motif—and in the poplar tree, it is referenced more directly in paintings like The Abbey in the Oakwood (1808–10), in which monks carry a coffin past an open grave, toward a cross, and through the portal of a church in ruins. He was one of the first artists to portray winter landscapes in which the land is rendered as stark and dead. Friedrich's winter scenes are solemn and still—according to the art historian Hermann Beenken, Friedrich painted winter scenes in which "no man has yet set his foot. The theme of nearly all the older winter pictures had been less winter itself than life in winter. In the 16th and 17th centuries, it was thought impossible to leave out such motifs as the crowd of skaters, the wanderer... It was Friedrich who first felt the wholly detached and distinctive features of a natural life. Instead of many tones, he sought the one; and so, in his landscape, he subordinated the composite chord into one single basic note". Bare oak trees and tree stumps, such as those in Raven Tree (c. 1822), Man and Woman Contemplating the Moon (c. 1824), and Willow Bush under a Setting Sun (c. 1835), are recurring elements of Friedrich's paintings, symbolizing death. Countering the sense of despair are Friedrich's symbols for redemption: the cross and the clearing sky promise eternal life, and the slender moon suggests hope and the growing closeness of Christ. In his paintings of the sea, anchors often appear on the shore, also indicating a spiritual hope. German literature scholar Alice Kuzniar finds in Friedrich's painting a temporality—an evocation of the passage of time—that is rarely highlighted in the visual arts. For example, in The Abbey in the Oakwood, the movement of the monks away from the open grave and toward the cross and the horizon imparts Friedrich's message that the final destination of man's life lies beyond the grave. With dawn and dusk constituting prominent themes of his landscapes, Friedrich's own later years were characterized by a growing pessimism. His work becomes darker, revealing a fearsome monumentality. The Wreck of the Hope—also known as The Polar Sea or The Sea of Ice (1823–24)—perhaps best summarizes Friedrich's ideas and aims at this point, though in such a radical way that the painting was not well received. Completed in 1824, it depicted a grim subject, a shipwreck in the Arctic Ocean; "the image he produced, with its grinding slabs of travertine-colored floe ice chewing up a wooden ship, goes beyond documentary into allegory: the frail bark of human aspiration crushed by the world's immense and glacial indifference." Friedrich's written commentary on aesthetics was limited to a collection of aphorisms set down in 1830, in which he explained the need for the artist to match natural observation with an introspective scrutiny of his own personality. His best-known remark advises the artist to "close your bodily eye so that you may see your picture first with the spiritual eye. Then bring to the light of day that which you have seen in the darkness so that it may react upon others from the outside inwards." He rejected the overreaching portrayals of nature in its "totality", as found in the work of contemporary painters like Adrian Ludwig Richter (1803–84) and Joseph Anton Koch (1768–1839). Loneliness and death Both Friedrich's life and art have at times been perceived by some to have been marked with an overwhelming sense of loneliness. 
Art historians and some of his contemporaries attribute such interpretations to the losses suffered during his youth to the bleak outlook of his adulthood, while Friedrich's pale and withdrawn appearance helped reinforce the popular notion of the "taciturn man from the North". Friedrich suffered depressive episodes in 1799, 1803–1805, c.1813, in 1816 and between 1824 and 1826. There are noticeable thematic shifts in the works he produced during these episodes, which see the emergence of such motifs and symbols as vultures, owls, graveyards and ruins. From 1826 these motifs became a permanent feature of his output, while his use of color became more dark and muted. Carus wrote in 1929 that Friedrich "is surrounded by a thick, gloomy cloud of spiritual uncertainty", though the noted art historian and curator Hubertus Gassner disagrees with such notions, seeing in Friedrich's work a positive and life-affirming subtext inspired by Freemasonry and religion. Germanic folklore Reflecting Friedrich's patriotism and resentment during the 1813 French occupation of the dominion of Pomerania, motifs from German folklore became increasingly prominent in his work. An anti-French German nationalist, Friedrich used motifs from his native landscape to celebrate Germanic culture, customs and mythology. He was impressed by the anti-Napoleonic poetry of Ernst Moritz Arndt and Theodor Körner, and the patriotic literature of Adam Müller and Heinrich von Kleist. Moved by the deaths of three friends killed in battle against France, as well as by Kleist's 1808 drama Die Hermannsschlacht, Friedrich undertook a number of paintings in which he intended to convey political symbols solely by means of the landscape—a first in the history of art. In Old Heroes' Graves (1812), a dilapidated monument inscribed "Arminius" invokes the Germanic chieftain, a symbol of nationalism, while the four tombs of fallen heroes are slightly ajar, freeing their spirits for eternity. Two French soldiers appear as small figures before a cave, lower and deep in a grotto surrounded by rock, as if farther from heaven. A second political painting, Fir Forest with the French Dragoon and the Raven (c. 1813), depicts a lost French soldier dwarfed by a dense forest, while on a tree stump a raven is perched—a prophet of doom, symbolizing the anticipated defeat of France. Legacy Influence Alongside other Romantic painters, Friedrich helped position landscape painting as a major genre within Western art. Of his contemporaries, Friedrich's style most influenced the painting of Johan Christian Dahl (1788–1857). Among later generations, Arnold Böcklin (1827–1901) was strongly influenced by his work, and the substantial presence of Friedrich's works in Russian collections influenced many Russian painters, in particular Arkhip Kuindzhi (c. 1842–1910) and Ivan Shishkin (1832–98). Friedrich's spirituality anticipated American painters such as Albert Pinkham Ryder (1847–1917), Ralph Blakelock (1847–1919), the painters of the Hudson River School and the New England Luminists. At the turn of the 20th century, Friedrich was rediscovered by the Norwegian art historian Andreas Aubert (1851–1913), whose writing initiated modern Friedrich scholarship, and by the Symbolist painters, who valued his visionary and allegorical landscapes. The Norwegian Symbolist Edvard Munch (1863–1944) would have seen Friedrich's work during a visit to Berlin in the 1880s. 
Munch's 1899 print The Lonely Ones echoes Friedrich's Rückenfigur (back figure), although in Munch's work the focus has shifted away from the broad landscape and toward the sense of dislocation between the two melancholy figures in the foreground. Friedrich's modern revival gained momentum in 1906, when thirty-two of his works were featured in an exhibition in Berlin of Romantic-era art. His landscapes exercised a strong influence on the work of German artist Max Ernst (1891–1976), and as a result other Surrealists came to view Friedrich as a precursor to their movement. In 1934, the Belgian painter René Magritte (1898–1967) paid tribute in his work The Human Condition, which directly echoes motifs from Friedrich's art in its questioning of perception and the role of the viewer. A few years later, the Surrealist journal Minotaure featured Friedrich in a 1939 article by critic Marie Landsberger, thereby exposing his work to a far wider circle of artists. The influence of The Wreck of Hope (or The Sea of Ice) is evident in the 1940–41 painting Totes Meer by Paul Nash (1889–1946), a fervent admirer of Ernst. Friedrich's work has been cited as an inspiration by other major 20th-century artists, including Mark Rothko (1903–1970), Gerhard Richter (b. 1932), Gotthard Graubner and Anselm Kiefer (b. 1945). Friedrich's Romantic paintings have also been singled out by writer Samuel Beckett (1906–89), who, standing before Man and Woman Contemplating the Moon, said "This was the source of Waiting for Godot, you know." In his 1961 article "The Abstract Sublime", originally published in ARTnews, the art historian Robert Rosenblum drew comparisons between the Romantic landscape paintings of both Friedrich and Turner with the Abstract Expressionist paintings of Mark Rothko. Rosenblum specifically describes Friedrich's 1809 painting The Monk by the Sea, Turner's The Evening Star and Rothko's 1954 Light, Earth and Blue as revealing affinities of vision and feeling. According to Rosenblum, "Rothko, like Friedrich and Turner, places us on the threshold of those shapeless infinities discussed by the aestheticians of the Sublime. The tiny monk in the Friedrich and the fisher in the Turner establish a poignant contrast between the infinite vastness of a pantheistic God and the infinite smallness of His creatures. In the abstract language of Rothko, such literal detail—a bridge of empathy between the real spectator and the presentation of a transcendental landscape—is no longer necessary; we ourselves are the monk before the sea, standing silently and contemplatively before these huge and soundless pictures as if we were looking at a sunset or a moonlit night." The contemporary artist Christiane Pooley gets inspired by Friedrich's work for her landscapes reinterpreting the history of Chile. Critical opinion Until 1890, and especially after his friends had died, Friedrich's work lay in near-oblivion for decades. Yet, by 1890, the symbolism in his work began to ring true with the artistic mood of |
have sought a career working with children. In 1981, Love was granted a small trust fund that had been left by her maternal grandparents, which she used to travel to Dublin, Ireland, where her biological father was living. She audited courses at Trinity College, studying theology for two semesters. She later received honorary patronage from Trinity's University Philosophical Society in 2010. While in Dublin, Love met musician Julian Cope of the Teardrop Explodes at one of the band's concerts. Cope took a liking to Love and offered to let her stay at his Liverpool home in his absence. She traveled to London, where she was met by her friend and future bandmate, Robin Barbur, from Portland. Recalling Cope's offer, Love and Barbur moved into Cope's home with him and several other artists, including Pete de Freitas of Echo & the Bunnymen. De Freitas was initially hesitant to allow the girls to stay, but acquiesced as they were "alarmingly young and obviously had nowhere else to go". Love recalled: "They kind of took me in. I was sort of a mascot; I would get them coffee or tea during rehearsals." Cope writes of Love frequently in his 1994 autobiography, Head-On, in which he refers to her as "the adolescent". In July 1982, Love returned to the United States. In late 1982, she attended a Faith No More concert in San Francisco and convinced the members to let her join as a singer. The group recorded material with Love as a vocalist, but fired her; according to keyboardist Roddy Bottum, who remained Love's friend in the years after, the band wanted a "male energy". Love returned to working abroad as an erotic dancer, briefly in Taiwan, and then at a taxi dance hall in Hong Kong. By Love's account, she first used heroin while working at the Hong Kong dance hall, having mistaken it for cocaine. While still inebriated from the drug, Love was pursued by a wealthy male client who requested that she return with him to the Philippines, and gave her money to purchase new clothes. She used the money to purchase airfare back to the United States. 1983–1987: Early music projects and film At age 19, through her then-boyfriend's mother, film costume designer Bernadene Mann, Love took a short-lived job working at Paramount Studios cleaning out the wardrobe department of vintage pieces that had suffered dry rot or other damage. During this time, Love became interested in vintage fashion. She subsequently returned to Portland, where she formed embryonic musical projects with her friends Ursula Wehr and Robin Barbur (namely Sugar Babylon, later known as Sugar Babydoll), Love formed the Pagan Babies with Kat Bjelland, whom she met at the Satyricon club in Portland in 1984. As Love later reflected, "The best thing that ever happened to me in a way, was Kat." Love asked Bjelland to start a band with her as a guitarist, and the two moved to San Francisco in June 1985, where they recruited bassist Jennifer Finch and drummer Janis Tanaka. According to Bjelland, "[Courtney] didn't play an instrument at the time" aside from keyboards, so Bjelland would transcribe Love's musical ideas on guitar for her. The group played several house shows and recorded one 4-track demo before disbanding in late 1985. After Pagan Babies, Love moved to Minneapolis, where Bjelland had formed the group Babes in Toyland, and briefly worked as a concert promoter before returning to California. 
Drummer Lori Barbero recalled Love's time in Minneapolis. Deciding to shift her focus to acting, Love enrolled at the San Francisco Art Institute and studied film under experimental director George Kuchar, featuring in one of his short films, Club Vatican. She also took experimental theater courses in Oakland taught by Whoopi Goldberg. In 1985, Love submitted an audition tape for the role of Nancy Spungen in the Sid Vicious biopic Sid and Nancy (1986), and was given a minor supporting role by director Alex Cox. After filming Sid and Nancy in New York City, she worked at a peep show in Times Square and squatted at the ABC No Rio social center and Pyramid Club in the East Village. The same year, Cox cast her in a leading role in his film Straight to Hell (1987), a Spaghetti Western starring Joe Strummer and Grace Jones, filmed in Spain in 1986. The film caught the attention of Andy Warhol, who featured Love in an episode of Andy Warhol's Fifteen Minutes. She also had a part in the 1988 Ramones music video for "I Wanna Be Sedated", appearing as a bride among dozens of party guests. In 1988, Love abandoned acting and returned to the West Coast, citing the "celebutante" fame she had attained as the central reason. She returned to stripping in the small town of McMinnville, Oregon, where she was recognized by customers at the bar. This prompted Love to go into isolation, so she relocated to Anchorage, Alaska, where she lived for three months to "gather her thoughts", supporting herself by working at a strip club frequented by local fishermen. "I decided to move to Alaska because I needed to get my shit together and learn how to work", she said in retrospect. "So I went on this sort of vision quest. I got rid of all my earthly possessions. I had my bad little strip clothes and some big sweaters, and I moved into a trailer with a bunch of other strippers." 1988–1991: Beginnings of Hole At the end of 1988, Love taught herself to play guitar and relocated to Los Angeles, where she placed an ad in a local music zine: "I want to start a band. My influences are Big Black, Sonic Youth, and Fleetwood Mac." By 1989, Love had recruited guitarist Eric Erlandson; bassist Lisa Roberts, her neighbor; and drummer Caroline Rue, whom she met at a Gwar concert. Love named the band Hole after a line from Euripides' Medea ("There is a hole that pierces right through me") and a conversation in which her mother told her that she could not live her life "with a hole running through her". On July 23, 1989, Love married Leaving Trains vocalist James Moreland in Las Vegas; the marriage was annulled the same year. She later said that Moreland was a transvestite and that they had married "as a joke". After forming Hole, Love and Erlandson had a romantic relationship that lasted over a year. In Hole's formative stages, Love continued to work at strip clubs in Hollywood (including Jumbo's Clown Room and the Seventh Veil), saving money to purchase backline equipment and a touring van, while rehearsing at a Hollywood studio loaned to her by the Red Hot Chili Peppers. Hole played their first show in November 1989 at Raji's, a rock club in central Hollywood. Their debut single, "Retard Girl", was issued in April 1990 through the Long Beach indie label Sympathy for the Record Industry, and was played by Rodney Bingenheimer on local rock station KROQ. Hole appeared on the cover of Flipside, a Los Angeles-based punk fanzine. In early 1991, they released their second single, "Dicknail", through Sub Pop Records.
With no wave, noise rock and grindcore bands being major influences on Love, Hole's first studio album, Pretty on the Inside, captured an abrasive sound and contained disturbing, graphic lyrics, described by Q as "confrontational [and] genuinely uninhibited". The record was released in September 1991 on Caroline Records, produced by Kim Gordon of Sonic Youth with assistant production from Gumball's Don Fleming; Love and Gordon had met when Hole opened for Sonic Youth during their promotional tour for Goo at the Whisky a Go Go in November 1990. In early 1991, Love sent Gordon a personal letter asking her to produce the record for the band, to which she agreed. Though Love later said Pretty on the Inside was "unlistenable" and "[un]melodic", the album received generally positive critical reception from indie and punk rock critics and was named one of the 20 best albums of the year by Spin. It gained a following in the United Kingdom, charting at 59 on the UK Albums Chart, and its lead single, "Teenage Whore", entered the UK Indie Chart at number one. The album's feminist slant led many to tag the band as part of the riot grrrl movement, a movement with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City. During the tour, Love briefly dated Smashing Pumpkins frontman Billy Corgan and then Nirvana frontman Kurt Cobain. There are varying accounts of how Love and Cobain came to know one another. Journalist Michael Azerrad states that the two met in 1989 at the Satyricon nightclub in Portland, Oregon, though Cobain biographer Charles Cross claimed the date was actually February 12, 1990; Cross said that Cobain playfully wrestled Love to the floor after she said that he looked like Dave Pirner of Soul Asylum. According to Love, she first met Cobain at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson said that both he and Love were introduced to Cobain in a parking lot after a Butthole Surfers/L7 concert at the Hollywood Palladium on May 17, 1991. Sometime in late 1991, Love and Cobain became re-acquainted through Jennifer Finch, one of Love's longtime friends and former bandmates. Love and Cobain were a couple by 1992. 1992–1995: Marriage to Kurt Cobain, Live Through This and breakthrough Shortly after completing the tour for Pretty on the Inside, Love married Cobain on Waikiki Beach in Honolulu, Hawaii on February 24, 1992. She wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. During Love's pregnancy, Hole recorded a cover of "Over the Edge" for a Wipers tribute album, and recorded their fourth single, "Beautiful Son", which was released in April 1993. On August 18, the couple's only child, a daughter, Frances Bean Cobain, was born in Los Angeles. The couple relocated to Carnation, Washington and then to Seattle. Love's first major media exposure came in a September 1992 profile with Cobain for Vanity Fair by journalist Lynn Hirschberg, entitled "Strange Love". Cobain had become a major public figure following the surprise success of Nirvana's album Nevermind. Love was urged by her manager to participate in the cover story. In the year prior, Love and Cobain had developed a heroin addiction; the profile painted them in an unflattering light, suggesting that Love had been addicted to heroin during her pregnancy. 
The Los Angeles Department of Children and Family Services investigated, and custody of Frances was temporarily awarded to Love's sister, Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit heroin during her first trimester after she discovered she was pregnant. Love later said the article had serious implications for her marriage and Cobain's mental state, suggesting it was a factor in his suicide two years later. On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of "Pennyroyal Tea" and "Where Did You Sleep Last Night". Love also performed electric versions of two new Hole songs, "Doll Parts" and "Miss World", both written for their upcoming second album. In October 1993, Hole recorded their second album, Live Through This, in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel. Live Through This was released on Geffen's subsidiary label DGC on April 12, 1994, one week after Cobain's death from a self-inflicted gunshot wound in the Seattle home he shared with Love, who was in rehab in Los Angeles at the time. In the following months, Love was rarely seen in public, holing up at her home with friends and family members. Cobain's remains were cremated and his ashes divided into portions by Love, who kept some in a teddy bear and some in an urn. In June 1994, she traveled to the Namgyal Buddhist Monastery in Ithaca, New York and had his ashes ceremonially blessed by Buddhist monks. Another portion was mixed into clay and made into memorial sculptures. On June 16, 1994, Hole bassist Kristen Pfaff died of a heroin overdose in Seattle. For the band's impending tour, Love recruited Canadian bassist Melissa Auf der Maur. Live Through This was a commercial and critical success, hitting platinum RIAA certification in April 1995 and receiving numerous critical accolades. The success of the record combined with Cobain's suicide resulted in a high level of publicity for Love, and she was featured on Barbara Walters' 10 Most Fascinating People in 1995. Simultaneously, her erratic onstage behavior and various legal troubles during Hole's 1994–1995 world tour compounded the media coverage of her. Hole's performance on August 26, 1994, at the Reading Festival—Love's first public performance following Cobain's death—was described by MTV as "by turns macabre, frightening and inspirational". John Peel wrote in The Guardian that Love's disheveled appearance "would have drawn whistles of astonishment in Bedlam", and that her performance "verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... the band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage." The band performed a series of riotous concerts over the following year, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist reported that at the band's show in Boston in December 1994: "Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her." 
In January 1995, Love was arrested in Melbourne for disrupting a Qantas flight after getting into an argument with a stewardess. On July 4, 1995, at the Lollapalooza Festival in George, Washington, Love threw a lit cigarette at musician Kathleen Hanna before punching her in the face, alleging that Hanna had made a joke about her daughter. She pleaded guilty to an assault charge and was sentenced to anger management classes. In November 1995, two male teenagers sued Love for allegedly punching them during a Hole concert in Orlando, Florida in March 1995. The judge dismissed the case on grounds that the teens "weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert". Love later said she had little memory of 1994–1995, as she had been using large quantities of heroin and Rohypnol at the time. 1996–2002: Acting success and Celebrity Skin After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic Basquiat and the drama Feeling Minnesota (1996), and then a starring role as Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film The People vs. Larry Flynt. Love went through rehabilitation and quit using heroin at the insistence of Forman; she was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming, and passed all of them. Despite Columbia Pictures' initial reluctance to hire Love due to her troubled past, her performance received acclaim, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film "quite a performance; Love proves she is not a rock star pretending to act, but a true actress." She won several other awards from various film critic associations for the film. During this time, Love maintained what the media noted as a more decorous public image, and she appeared in ad campaigns for Versace and in a Vogue Italia spread. Following the release of The People vs. Larry Flynt, she dated her co-star Edward Norton, with whom she remained until 1999. In late 1997, Hole released the compilations My Body, the Hand Grenade and The First Session, both of which featured previously recorded material. Love attracted media attention in May 1998 after punching journalist Belissa Cohen at a party; the resulting lawsuit was settled out of court for an undisclosed sum. In September 1998, Hole released their third studio album, Celebrity Skin, which featured a stark power pop sound that contrasted with their earlier punk influences. Love divulged her ambition of making an album where "art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision." She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs. Celebrity Skin was well received by critics; Rolling Stone called it "accessible, fiery and intimate—often at the same time ... a basic guitar record that's anything but basic." Celebrity Skin went multi-platinum, and topped "Best of Year" lists at Spin and The Village Voice. It yielded Hole's only number-one single on the Modern Rock Tracks chart, "Celebrity Skin". Hole promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were nominated for three Grammy Awards at the 41st Grammy Awards ceremony.
Before the release of Celebrity Skin, Love and Fender designed a low-priced Squier brand guitar, the Vista Venus. The instrument's shape was inspired by designs from Mercury, a little-known independent guitar manufacturer, as well as by the Stratocaster and Rickenbacker's solid-body guitars; it had a single-coil and a humbucker pickup, and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: "I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch." Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine performances; Love and Manson disagreed over production costs, and Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later said that Hole also abandoned the tour because of the sexualized treatment of teenage female audience members by Manson and Korn (with whom they also toured in Australia). Love told interviewers at 99X.FM in Atlanta: "What I really don't like—there are certain girls that like us, or like me, who are really messed up ... and they do not need to be—they're very young—and they do not need to be taken and raped, or filmed having enema contests ... going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then having to see them in the morning ... it's just uncool." In 1999, Love was awarded an Orville H. Gibson award for Best Female Rock Guitarist. During this time, she starred opposite Jim Carrey as his partner Lynne Margulies in the Andy Kaufman biopic Man on the Moon (1999), followed by a role as William S. Burroughs's wife Joan Vollmer in Beat (2000) alongside Kiefer Sutherland. Love was cast as the lead in John Carpenter's sci-fi horror film Ghosts of Mars, but backed out after injuring her foot. She sued the ex-wife of her then-boyfriend, James Barber, who Love alleged had caused the injury by running over her foot with her Volvo. The following year, she returned to film opposite Lili Taylor in Julie Johnson (2001), in which she played a woman who has a lesbian relationship; Love won an Outstanding Actress award at L.A.'s Outfest. She was then cast in the thriller Trapped (2002), alongside Kevin Bacon and Charlize Theron. The film was a box-office flop. In the interim, Hole had become dormant. In March 2001, Love began a "punk rock femme supergroup", Bastard, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. Post recalled: "[Love] was like, 'Listen, you guys: I've been in my Malibu, manicure, movie-star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh?' We were being so improvisational, and singing together, and with a trust developing between us. It was the shit." The group recorded a demo tape, but by September 2001, Post and Crosley had left, with Post citing "unhealthy and unprofessional working conditions". In May 2002, Hole announced their breakup amid continuing litigation with Universal Music Group over their record contract. In 1997, Love and former Nirvana members Krist Novoselic and Dave Grohl formed a limited liability company, Nirvana LLC, to manage Nirvana's business dealings.
In June 2001, Love filed a lawsuit to dissolve it, blocking the release of unreleased Nirvana material and delaying the release of the Nirvana compilation With the Lights Out. Grohl and Novoselic sued Love, calling her "irrational, mercurial, self-centered, unmanageable, inconsistent and unpredictable". She responded with a letter stating that "Kurt Cobain was Nirvana" and that she and his family were the "rightful heirs" to the Nirvana legacy. 2003–2008: Solo work and legal troubles In February 2003, Love was arrested at Heathrow Airport for disrupting a flight and was banned from Virgin Airlines. In October, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home, and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter. After the breakup of Hole, Love began composing material with songwriter Linda Perry, and in July 2003 signed a contract with Virgin Records. She began recording her debut solo album, America's Sweetheart, in France shortly after. Virgin Records released America's Sweetheart in February 2004; it received mixed reviews. Charles Aaron of Spin called it a "jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's Live Through This" and awarded it eight out of ten, while Amy Phillips of The Village Voice wrote: "[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?" The album sold fewer than 100,000 copies. Love later expressed regret over the record, blaming her drug problems at the time. Shortly after it was released, she told Kurt Loder on TRL: "I cannot exist as a solo artist. It's a joke." On March 17, 2004, Love appeared on the Late Show with David Letterman to promote America's Sweetheart. Her appearance drew media coverage when she lifted her shirt multiple times, flashed Letterman, and stood on his desk. The New York Times wrote: "The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting." Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a fan with a microphone stand during a small concert in the East Village. She was released within hours and performed a scheduled concert the following evening at the Bowery Ballroom. Four days later, she called in multiple times to The Howard Stern Show, claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest. On July 9, 2004, her 40th birthday, Love was arrested for failing to make a court appearance for the March 2004 charges, and taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential danger to herself, but deemed mentally sound and released to a rehab facility two days later. 
Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, "Courtney Deserves Better from Feminists", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the incident in the East Village. Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love "acted as if she belonged in an institution". Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Nichiren Buddhist practice (which she began in 1989) as integral to her sobriety. In the midst of her | deemed mentally sound and released to a rehab facility two days later. Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, "Courtney Deserves Better from Feminists", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the incident in the East Village. Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love "acted as if she belonged in an institution". Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Nichiren Buddhist practice (which she began in 1989) as integral to her sobriety. In the midst of her legal troubles, Love pursued endeavors in writing and publishing. She co-wrote a semi-autobiographical manga, Princess Ai (Japanese: プリンセス·アイ物語), with Stu Levy, illustrated by Misaho Kujiradou and Ai Yazawa; it was released in three volumes in the United States and Japan between 2004 and 2006. In 2006, Love published a memoir, Dirty Blonde, and began recording her second solo album, How Dirty Girls Get Clean, collaborating again with Perry and Billy Corgan. Love had written several songs, including an anti-cocaine song titled "Loser Dust", during her time in rehab in 2005. She told Billboard: "My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ...
I never thought I would work again." Tracks and demos for the album leaked online in 2006, and a documentary, The Return of Courtney Love, detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of "Never Go Hungry Again", recorded during an interview for The Times in November, was also released. Incomplete audio clips of the song "Samantha", originating from an interview with NPR, were distributed on the internet in 2007. 2009–2012: Hole revival and visual art In March 2009, fashion designer Dawn Simorangkir brought a libel suit against Love over a defamatory post Love had made on her Twitter account; the suit was eventually settled for $450,000. Several months later, in June 2009, NME published an article detailing Love's plan to reunite Hole and release a new album, Nobody's Daughter. In response, former Hole guitarist Eric Erlandson stated in Spin magazine that contractually no reunion could take place without his involvement; therefore Nobody's Daughter would remain Love's solo record, as opposed to a "Hole" record. Love responded to Erlandson's comments in a Twitter post, claiming "he's out of his mind, Hole is my band, my name, and my Trademark". Nobody's Daughter was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, Shawn Dailey (bass guitar), and Stu Fisher (drums, percussion). Nobody's Daughter featured material written and recorded for Love's unfinished solo album, How Dirty Girls Get Clean, including "Pacific Coast Highway", "Letter to God", "Samantha", and "Never Go Hungry", although they were re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter was largely centered on Love's tumultuous life between 2003 and 2007, and featured a polished folk rock sound and more acoustic guitar work than previous Hole albums. The first single from Nobody's Daughter was "Skinny Little Bitch", released to promote the album in March 2010. The album received mixed reviews. Robert Sheffield of Rolling Stone gave the album three out of five, saying Love "worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, America's Sweetheart". Sal Cinquemani of Slant Magazine also gave the album three out of five: "It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim." Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. In 2011, Love participated in Hit So Hard, a documentary chronicling bandmate Schemel's time in Hole. In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled "And She's Not Even Pretty", which contained over forty drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors.
Later in the year, she collaborated with Michael Stipe on the track "Rio Grande" for Johnny Depp's sea shanty album Son of Rogues Gallery, and in 2013, co-wrote and contributed vocals on "Rat A Tat" from Fall Out Boy's album Save Rock and Roll, also appearing in the song's music video. 2013–2015: Return to acting; libel lawsuits After dropping the Hole name and performing as a solo artist in late 2012, Love appeared in spring 2013 advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a "greatest hits" tour, and featured songs from Love's and Hole's back catalogue. Love told Billboard at the time that she had recorded eight songs in the studio. Love was the subject of a second landmark libel lawsuit, brought against her in January 2014 by her former attorney Rhonda Holmes, who accused Love of online defamation, seeking $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial. The jury, however, found in Love's favor. A subsequent defamation lawsuit filed by fashion designer Simorangkir in February 2014, however, resulted in Love being ordered to pay a further $350,000 in recompense. On April 22, 2014, Love debuted the song "You Know My Name" on BBC Radio 6 to promote her tour of the United Kingdom. It was released as a double A-side single with the song "Wedding Day" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn, and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled, and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with Pitchfork, Love commented further on the possibility of Hole reuniting, saying: "I'm not going to commit to it happening, because we want an element of surprise. There's a lot of i's to be dotted and t's to be crossed." Love was cast in several television series in supporting parts throughout 2014, including the FX series Sons of Anarchy, Revenge, and Lee Daniels' network series Empire in a recurring guest role as Elle Dallas. The track "Walk Out on Me" featuring Love was included on the Empire: Original Soundtrack from Season 1 album, which debuted at number 1 on the Billboard 200. Alexis Petridis of The Guardian praised the track, saying: "The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... the reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up." In January 2015, Love starred in a New York City stage production titled Kansas City Choir Boy, a "pop opera" conceived by and co-starring Todd Almond. Charles Isherwood of The New York Times praised her performance, noting a "soft-edged and bewitching" stage presence, adding: "Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained note can seem to simultaneously contain a plea, a wound and a threat." The show toured later in the year, with performances in Boston and Los Angeles.
Love saw further legal troubles in April 2015 when journalist Anthony Bozza sued her over an alleged contractual violation regarding his co-writing of her memoir. Love subsequently joined Lana Del Rey on her Endless Summer Tour, performing as an opener on the tour's eight West Coast shows in May–June 2015. During her tenure on Del Rey's tour, Love debuted a new single, "Miss Narcissist", released on Wavves' independent label Ghost Ramp. She also was cast in a supporting role in James Franco's film The Long Home, based on William Gay's novel of the same name, marking her first film role in over ten years. 2016–present: Fashion and forthcoming music In January 2016, Love released a clothing line in collaboration with Sophia Amoruso titled "Love, Courtney", featuring eighteen pieces reflecting her personal style. In November 2016, she began filming the pilot for A Midsummer's Nightmare, a Shakespeare anthology series adapted for Lifetime. She then starred as Kitty Menendez in Menendez: Blood Brothers, a biopic television film based on the lives of Lyle and Erik Menendez, which premiered on Lifetime in June 2017. In October 2017, shortly after the Harvey Weinstein scandal first made news, a 2005 video of Love warning young actresses about Weinstein went viral. In the footage, while on the red carpet for the Comedy Central Roast of Pamela Anderson, Love was asked by Natasha Leggero if she had any advice for "a young girl moving to Hollywood"; she responded, "If Harvey Weinstein invites you to a private party in the Four Seasons [hotel], don't go". She later tweeted, "Although I wasn't one of his victims, I was eternally banned by [Creative Artists Agency] for speaking out". The same year, she was cast in Justin Kelly's biopic JT LeRoy, portraying a film producer opposite Laura Dern. In March 2018, Love appeared in the music video for Marilyn Manson's "Tattooed in Reverse", which she followed with an April 5, 2018, guest-judge appearance on RuPaul's Drag Race. In December 2018, Love filed and was awarded a restraining order against Sam Lutfi, who had acted as her manager for the previous six years, alleging verbal abuse and harassment. Her daughter, Frances, and sister, Jaimee, were also awarded restraining orders against Lutfi. In January 2019, a Los Angeles County judge extended the three-year duration of the order to a total of five years, citing Lutfi's apparent tendency to "prey upon people". On August 18, 2019, Love performed a solo set at the Yola Día festival in Los Angeles, which also featured performances by Cat Power and Lykke Li. On September 9, Love garnered press attention when she publicly criticized Joss Sackler, an heiress to the Sackler family OxyContin fortune, after she allegedly offered Love $100,000 to attend her fashion show during New York Fashion Week. In the same statement, Love indicated that she had relapsed into opioid addiction in 2018, stating that she had recently celebrated a year of sobriety. In October 2019, Love relocated from Los Angeles to London. On November 21, 2019, Love recorded the song "Mother", written and produced by Lawrence Rothman, as part of the soundtrack for the horror film The Turning (2020). In January 2020, she was honored with the Icon Award at the NME Awards, deemed by the publication as "one of the most influential singers in alternative culture of the last 30 years." The following month, she confirmed she was writing a new record which she described as "really sad ... [I'm] writing in minor chords, and that appeals to my sadness." 
Love revealed in March 2021 that she had been hospitalized with acute anemia in August 2020, which left her severely underweight and nearly killed her, though she has since made a full recovery. In June 2021, Love revealed the project Bruises of Roses, a new video series featuring her performing cover versions of her favorite songs. Artistry Influences Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall as a young teenager. As a child, her first exposure to music was records that her parents received each month through Columbia Record Club. The first record Love owned was Leonard Cohen's Songs of Leonard Cohen (1967), which she obtained from her mother: "He was so lyric-conscious and morbid, and I was a pretty morbid kid", she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as being a pivotal influence: "I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll", she recalled. "[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fucked me up." Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him. Though often associated with punk music, Love has noted that her most significant musical influences have been post-punk and new wave artists, a point she reiterated in 2021. Over the years, Love has also named several other new wave and post-punk bands as influences, including The Smiths, Siouxsie and the Banshees, Television, and Bauhaus. Love's diverse genre interests were illustrated in a 1991 interview with Flipside, in which she stated: "There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band." Discussing the abrasive sound of Hole's debut album, she said she felt she had to "catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths." She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing Celebrity Skin, she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: "The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences." Literature and poetry have often been a major influence on her songwriting; Love said she had "always wanted to be a poet, but there was no money in it." She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics. Musical style and lyrics Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. Spin's October 1991 review of Hole's first album noted Love's layering of harsh and abrasive riffs that buried more sophisticated musical arrangements. In 1998, she stated that Hole had "always been a pop band. We always had a subtext of pop.
I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework." Love's lyrical content is composed from a female's point of view, and her lyrics have been described as "literate and mordant" and noted by scholars for "articulating a third-wave feminist consciousness." Simon Reynolds, in reviewing Hole's debut album, noted: "Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism." Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a "dark genius" and likened her work to that of Anne Sexton. Love has remarked that lyrics have always been the most important component of songwriting for her: "The important thing for me ... is it has to look good on the page. I mean, you can love Led Zeppelin and not love their lyrics ... but I made a big effort in my career to have what's on the page mean something." Common themes present in Love's lyrics during her early career included body image, rape, suicide, conformity, pregnancy, prostitution, and death. In a 1991 interview with Everett True, |
album Big Red Car Film and television The Cow (1969 film), an Iranian film The Cow (1989 film), a Soviet animated short Cow (2009 film), a Chinese film Cow (2021 film), a British documentary film Cow (public service announcement), an anti texting while driving public service announcement Cows (TV series), a pilot and cancelled television sitcom produced by Eddie Izzard for Channel 4 in 1997 Cow, a character in the animated series Cow and Chicken Computer Originated World, referring to the globe ID the BBC1 TV network used from 1985 to 1991 Music Cows (band), a noise rock band from Minneapolis Cow (demo), a 1987 EP by Inspiral Carpets "Cows", a song by Grandaddy from their 1992 album Prepare to Bawl Other uses Cerritos On Wheels, municipal bus service operated by the City of Cerritos, California College of Wooster, liberal arts college in Wooster, Ohio Crude oil washing Cows (ice cream), a Canadian ice cream brand Cowdenbeath railway station, | television sitcom produced by Eddie Izzard for Channel 4 in 1997 Cow, a character in the animated series Cow and Chicken Computer Originated World, referring to the globe ID the BBC1 TV network used from 1985 to 1991 Music Cows (band), a noise rock band from Minneapolis Cow (demo), a 1987 EP by Inspiral Carpets "Cows", a song by Grandaddy from their 1992 album Prepare to Bawl Other uses Cerritos On Wheels, municipal bus service operated by the City of Cerritos, California College of Wooster, liberal arts college in Wooster, Ohio Crude oil washing Cows (ice cream), a Canadian ice cream brand Cowdenbeath railway station, Scotland, National Rail station code Cow, part of a cow-calf railroad locomotive set COWS, a mnemonic for Cold Opposite, Warm Same in the caloric reflex test See also Vacas (English: Cows), a 1991 Spanish film Kráva (English: |
original authors published a subsequent paper in 2008 defending their conclusions. Myths, legends and folklore Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology and Baba Yaga of Slavic folklore. A number of stories in Greek mythology involve cannibalism, in particular cannibalism of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who was Saturn in the Roman pantheon. The story of Tantalus also parallels this. The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh. The wechuge is a demonic cannibalistic creature that seeks out human flesh. It is a creature appearing in the Native American mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms. Accusations William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of numerous "classic" cases of cultural cannibalism cited by explorers, missionaries, and anthropologists. He asserts that many were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Accusations of cannibalism helped characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." These assertions promoted the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, justifying the conquest. Cannibals were exempt from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Eskildsen describes, there was an exaggeration of cannibalism by Taiwanese indigenous peoples in Japan's popular media such as newspapers and illustrations at the time. This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by many Maori, who felt the book tarnished their whole people. The title of the book is drawn from the 16 January 1770 journal entry of Captain James Cook, who, in describing acts of Māori cannibalism, stated "though stronger evidence of this horrid practice prevailing among the inhabitants of this coast will scarcely be required, we have still stronger to give." History Among modern humans, cannibalism has been practiced by various groups. 
It was practiced by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Māori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States, including at Cowboy Wash in Colorado. Pre-history There is evidence, both archaeological and genetic, that cannibalism has been practiced for hundreds of thousands of years by early Homo sapiens and archaic hominins. Human bones that have been "de-fleshed" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that ritual cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites. Cannibalism in the Lower and Middle Paleolithic may have occurred because of food shortages. It has also been suggested that removing dead bodies through ritual cannibalism might have been a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that after major epidemics, when human corpses are easily accessible to predators, there are more cases of man-eating leopards, so removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might have served a practical purpose for hominids and early humans in controlling predation. In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels. Researchers have found physical evidence of cannibalism in ancient times. In 2001, archaeologists at the University of Bristol found evidence of Iron Age cannibalism in Gloucestershire. Cannibalism was practiced as recently as 2000 years ago in Great Britain. Early history Cannibalism is mentioned many times in early history and literature. Herodotus claimed in "The Histories" (written in the 450s to the 420s BCE) that beyond an eleven days' voyage up the Borysthenes (the Dnieper in Europe) lay a desolate land that extended for a long way, beyond which was located the country of the man-eaters (a people distinct from the Scythians), and beyond that again a desolate area where no men lived. The tomb of ancient Egyptian king Unas contained a hymn in praise of the king portraying him as a cannibal. The Stoic philosopher Chrysippus wrote in his treatise On Justice that cannibalism was ethically acceptable. Polybius records that Hannibal Monomachus once suggested to the Carthaginian general Hannibal that he teach his army to adopt cannibalism in order to be properly supplied in his travel to Italy, although Barca and his officers could not bring themselves to practice it. In the same war, Gaius Terentius Varro once claimed to the citizens of Capua that Barca's Gaulish and Spanish mercenaries fed on human flesh, though the claim was apparently acknowledged to be false. Cassius Dio recorded cannibalism practiced by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and devoured two Roman officers in ritualistic fashion, swearing an oath over their entrails.
According to Appian, during the Roman Siege of Numantia in the 2nd century BCE, the population of Numantia was reduced to cannibalism and suicide. Cannibalism was reported by Josephus during the siege of Jerusalem by Rome in 70 CE. Jerome, in his letter Against Jovinianus, discusses how people come to their present condition as a result of their heritage, and he then lists several examples of peoples and their customs. In the list, he mentions that he has heard that Attacotti eat human flesh and that Massagetae and Derbices (a people on the borders of India) kill and eat old people. Reports of cannibalism were recorded during the First Crusade, as Crusaders were alleged to have fed on the bodies of their dead opponents following the Siege of Ma'arra. Amin Maalouf also alleges further incidents of cannibalism on the march to Jerusalem, and describes efforts made to delete mention of these from Western history, even though this account does not appear in any contemporary Muslim chronicle. The famine and cannibalism described by Fulcher of Chartres are generally accepted, but the torture and killing of Muslim captives for cannibalism reported by Radulph of Caen are considered very unlikely, since no Arab or Muslim records of the events exist; had they occurred, they would probably have been recorded. This point was made in the BBC Timewatch episode The Crusades: A Timewatch Guide, which included the experts Thomas Asbridge and the Muslim Arabic historian Fozia Bora, who state that Radulph of Caen's description does not appear in any contemporary Muslim chronicle. During Europe's Great Famine of 1315–17, there were many reports of cannibalism among the starving populations. In North Africa, as in Europe, there are references to cannibalism as a last resort in times of famine. The Moroccan Muslim explorer ibn Battuta reported that one African king advised him that nearby people were cannibals (although this may have been a prank played on ibn Battuta by the king to fluster his guest). Ibn Battuta reported that Arabs and Christians were safe, as their flesh was "unripe" and would cause the eater to fall ill. For a brief time in Europe, an unusual form of cannibalism occurred when thousands of Egyptian mummies preserved in bitumen were ground up and sold as medicine. The practice developed into a wide-scale business which flourished until the late 16th century. This "fad" ended because the mummies were revealed actually to be recently killed slaves. Two centuries ago, mummies were still believed to have medicinal properties against bleeding, and were sold as pharmaceuticals in powdered form (see human mummy confection and mummia). In China during the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighboring areas for victims to eat), as well as by both soldiers and civilians besieged during the rebellion of An Lushan. Eating an enemy's heart and liver was also claimed to be a feature of both official punishments and private vengeance. References to cannibalizing the enemy have also been seen in poetry written in the Song dynasty (for example, in Man Jiang Hong), although the cannibalizing is perhaps poetic symbolism, expressing hatred towards the enemy. Charges of cannibalism were levied against the Qizilbash of the Safavid Ismail. There is universal agreement that some Mesoamerican people practiced human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread.
At one extreme, anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era believe that there was ritual cannibalism related to human sacrifices, they do not support Harris's thesis that human flesh was ever a significant portion of the Aztec diet. Others have hypothesized that cannibalism was part of a blood revenge in war. Early modern and colonial era European explorers and colonizers brought home many stories of cannibalism practiced by the native peoples they encountered, but there is now archeological and written evidence for English settlers' cannibalism in 1609 in the Jamestown Colony under famine conditions. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, but if they were "guilty" of cannibalism, they could be enslaved. The accusation of cannibalism became a pretext for attacks on indigenous groups and justification for the Spanish conquest. In Yucatán, the shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported having witnessed fellow Spaniards sacrificed and eaten, and escaped from a captivity in which he was being fattened for sacrifice himself. The Florentine Codex (1576), compiled by the Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses, contains questionable evidence of Mexica (Aztec) cannibalism. Franciscan friar Diego de Landa reported on instances in Yucatán. In early Brazil, there are reports of cannibalism among the Tupinamba. It is recorded about the natives of the captaincy of Sergipe in Brazil: "They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both." (see human placentophagy). In modern Brazil, a black comedy film, How Tasty Was My Little Frenchman, mostly in the Tupi language, portrays the capture and eventual demise of a Frenchman at the hands of the indigenous people. The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology) claims that North American natives practicing cannibalism included "... the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters.'" The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior.
According to another source, Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart. As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of "savages". The very first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another instance, on July 11, 1821, warriors from the Ngapuhi tribe killed 2,000 enemies and remained on the battlefield "eating the vanquished until they were driven off by the smell of decaying bodies". Māori warriors fighting the New Zealand government in Titokowaru's War in New Zealand's North Island in 1868–69 revived ancient rites of cannibalism as part of the radical Hauhau movement of the Pai Marire religion. In parts of Melanesia, cannibalism was still practiced in the early 20th century, for a variety of reasons—including retaliation, to insult an enemy people, or to absorb the dead person's qualities. One tribal chief, Ratu Udre Udre in Rakiraki, Fiji, is said to have consumed 872 people and to have made a pile of stones to record his achievement. Fiji was nicknamed the "Cannibal Isles" by European sailors, who avoided disembarking there. The dense population of the Marquesas Islands, in what is now French | into zoology to describe an individual of a species consuming all or part of another individual of the same species as food, including sexual cannibalism. The Island Carib people of the Lesser Antilles, from whom the word "cannibalism" is derived, acquired a long-standing reputation as cannibals after their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture. Cannibalism was practised in New Guinea and in parts of the Solomon Islands, and flesh markets existed in some parts of Melanesia. Fiji was once known as the "Cannibal Isles". Cannibalism has been well documented in much of the world, including Fiji, the Amazon Basin, the Congo, and the Māori people of New Zealand. Neanderthals are believed to have practised cannibalism, and they may themselves have been eaten by anatomically modern humans. Cannibalism was also practised in ancient Egypt, Roman Egypt and during famines in Egypt such as the great famine of 1199–1202. Cannibalism has recently been both practised and fiercely condemned in several wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons and in ritual as well as in war among various Melanesian tribes. Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". Some scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, at any time in history, although this claim has been consistently disputed.
A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. This practice was at its height during the 17th century, although as late as the second half of the 19th century some peasants attending an execution are recorded to have "rushed forward and scraped the ground with their hands that they might collect some of the bloody earth, which they subsequently crammed in their mouth, in hope that they might thus get rid of their disease." Cannibalism has occasionally been practiced as a last resort by people suffering from famine, even in modern times. Famous examples include the ill-fated Donner Party (1846–47) and, more recently, the crash of Uruguayan Air Force Flight 571 (1972), after which some survivors ate the bodies of the dead. Additionally, there are cases of people suffering from mental illness engaging in cannibalism for sexual pleasure, such as Jeffrey Dahmer and Albert Fish. There is resistance to formally labeling cannibalism a mental disorder. Etymology The word "cannibalism" is derived from Caníbales, the Spanish name for the Caribs, a West Indies tribe that may have practiced cannibalism, from Spanish canibal or caribal, "a savage". The term anthropophagy, meaning "eating humans", is also used for human cannibalism. Reasons In some societies, cannibalism is a cultural norm. Consumption of a person from within the same community is called endocannibalism; ritual cannibalism of the recently deceased can be part of the grieving process or be seen as a way of guiding the souls of the dead into the bodies of living descendants. Exocannibalism is the consumption of a person from outside the community, usually as a celebration of victory against a rival tribe. Both types of cannibalism can also be fueled by the belief that eating a person's flesh or internal organs will endow the cannibal with some of the characteristics of the deceased. In Guns, Germs and Steel, Jared Diamond suggests that protein deficiency was the ultimate reason why cannibalism was formerly common in the New Guinea highlands. In most parts of the world, cannibalism is not a societal norm, but is sometimes resorted to in situations of extreme necessity. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party. Such cases generally involve necro-cannibalism (eating the corpse of someone who is already dead) as opposed to homicidal cannibalism (killing someone for food). In English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. In pre-modern medicine, the explanation given by the now-discredited theory of humorism for cannibalism was that it came about within a black acrimonious humor, which, being lodged in the linings of the ventricle, produced the voracity for human flesh. Medical aspects A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. 
However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite. In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practiced extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, because it claimed to have found a data collection bias, which led to an erroneous conclusion. This claimed bias came from incidents of cannibalism used in the analysis not being due to local cultures, but having been carried out by explorers, stranded seafarers or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions. Myths, legends and folklore Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology and Baba Yaga of Slavic folklore. A number of stories in Greek mythology involve cannibalism, in particular cannibalism of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who was Saturn in the Roman pantheon. The story of Tantalus also parallels this. The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh. The wechuge is a demonic cannibalistic creature that seeks out human flesh. It is a creature appearing in the Native American mythology of the Athabaskan people. It is said to be half monster and half human-like; however, it has many shapes and forms. Accusations William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of numerous "classic" cases of cultural cannibalism cited by explorers, missionaries, and anthropologists. He asserts that many were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Accusations of cannibalism helped characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." These assertions promoted the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, justifying the conquest. Cannibals were exempt from Queen Isabella's prohibition on enslaving the indigenous. 
Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Eskildsen describes, there was an exaggeration of cannibalism by Taiwanese indigenous peoples in Japan's popular media such as newspapers and illustrations at the time. This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception by many Maori, who felt the book tarnished their whole people. The title of the book is drawn from the 16 January 1770 journal entry of Captain James Cook, who, in describing acts of Māori cannibalism, stated "though stronger evidence of this horrid practice prevailing among the inhabitants of this coast will scarcely be required, we have still stronger to give." History Among modern humans, cannibalism has been practiced by various groups. It was practiced by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Māori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States as well as (at Cowboy Wash in Colorado). Pre-history There is evidence, both archaeological and genetic, that cannibalism has been practiced for hundreds of thousands of years by early Homo Sapiens and archaic hominins. Human bones that have been "de-fleshed" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that ritual cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites. Cannibalism in the Lower and Middle Paleolithic may have occurred because of food shortages. It has been also suggested that removing dead bodies through ritual cannibalism might have been a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that after major epidemics, when human corpses are easily accessible to predators, there are more cases of man-eating leopards, so removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might have had practical reasons for hominids and early humans to control predation. In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels. Researchers have found physical evidence of cannibalism in ancient times. In 2001, archaeologists at the University of Bristol found evidence of Iron Age cannibalism in Gloucestershire. Cannibalism was practiced as recently as 2000 years ago in Great Britain. Early history Cannibalism is mentioned many times in early history and literature. Herodotus in "The Histories" (450s to the 420s BCE) claimed, that after eleven days' voyage up the Borysthenes (Dnieper in Europe) a desolated land extended for a long way, and later the country of the man-eaters (other than Scythians) was located, and beyond it again a desolated area extended where no men lived. 
The tomb of ancient Egyptian king Unas contained a hymn in praise to the king portraying him as a cannibal. The Stoic philosopher Chrysippus wrote in his treatise On Justice that cannibalism was ethically acceptable. Polybius records that Hannibal Monomachus once suggested to the Carthaginian general Hannibal that he teach his army to adopt cannibalism in order to be properly supplied in his travel to Italy, although Barca and his officers could not bring themselves to practice it. In the same war, Gaius Terentius Varro once claimed to the citizens of Capua that Barca's Gaul and Spanish mercenaries fed on human flesh, though this claim seemed to be acknowledged as false. Cassius Dio recorded cannibalism practiced by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and devoured two Roman officers in ritualistic fashion, swearing an oath over their entrails. According to Appian, during the Roman Siege of Numantia in the 2nd century BCE, the population of Numantia was reduced to cannibalism and suicide. Cannibalism was reported by Josephus during the siege of Jerusalem by Rome in 70 CE. Jerome, in his letter Against Jovinianus, discusses how people come to their present condition as a result of their heritage, and he then lists several examples of peoples and their customs. In the list, he mentions that he has heard that Attacotti eat human flesh and that Massagetae and Derbices (a people on the borders of India) kill and eat old people. Reports of cannibalism were recorded during the First Crusade, as Crusaders were alleged to have fed on the bodies of their dead opponents following the Siege of Ma'arra. Amin Maalouf also alleges further cannibalism incidents on the march to Jerusalem, and to the efforts made to delete mention of these from Western history. Even though this account does not appear in any contemporary Muslim chronicle. The famine and cannibalism are recognised as described by Fulcher of Chartres, but the torture and the killing of Muslim captives for cannibalism by Radulph of Caen are very unlikely since no Arab or Muslim records of the events exist. Had they occurred, they would have probably been recorded. That has been noted by BBC Timewatch series, the episode The Crusades: A Timewatch Guide, which included experts Thomas Asbridge and Muslim Arabic historian Fozia Bora, who state that Radulph of Caen's description does not appear in any contemporary Muslim chronicle. During Europe's Great Famine of 1315–17, there were many reports of cannibalism among the starving populations. In North Africa, as in Europe, there are references to cannibalism as a last resort in times of famine. The Moroccan Muslim explorer ibn Battuta reported that one African king advised him that nearby people were cannibals (although this may have been a prank played on ibn Battuta by the king to fluster his guest). Ibn Batutta reported that Arabs and Christians were safe, as their flesh was "unripe" and would cause the eater to fall ill. For a brief time in Europe, an unusual form of cannibalism occurred when thousands of Egyptian mummies preserved in bitumen were ground up and sold as medicine. The practice developed into a wide-scale business which flourished until the late 16th century. This "fad" ended because the mummies were revealed actually to be recently killed slaves. Two centuries ago, mummies were still believed to have medicinal properties against bleeding, and were sold as pharmaceuticals in powdered form (see human mummy confection and mummia). 
In China during the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighboring areas for victims to eat), as well as both soldiers and civilians besieged during the rebellion of An Lushan. Eating an enemy's heart and liver was also claimed to be a feature of both official punishments and private vengeance. References to cannibalizing the enemy have also been seen in poetry written in the Song dynasty (for example, in Man Jiang Hong), although the cannibalizing is perhaps poetic symbolism, expressing hatred towards the enemy. Charges of cannibalism were levied against the Qizilbash of the Safavid Ismail. There is universal agreement that some Mesoamerican people practiced human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread. At one extreme, anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era believe that there was ritual cannibalism related to human sacrifices, they do not support Harris's thesis that human flesh was ever a significant portion of the Aztec diet. Others have hypothesized that cannibalism was part of a blood revenge in war. Early modern and colonial era European explorers and colonizers brought home many stories of cannibalism practiced by the native peoples they encountered, but there is now archeological and written evidence for English settlers' cannibalism in 1609 in the Jamestown Colony under famine conditions. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, but if they were "guilty" of cannibalism, they could be enslaved. The accusation of cannibalism became a pretext for attacks on indigenous groups and justification for the Spanish conquest. In Yucatán, shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported to have witnessed fellow Spaniards sacrificed and eaten, but escaped from captivity where he was being fattened for sacrifice himself. In the Florentine Codex (1576) compiled by Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses has questionable evidence of Mexica (Aztec) cannibalism. Franciscan friar Diego de Landa reported on Yucatán instances. In early Brazil, there is reportage of cannibalism among the Tupinamba. It is recorded about the natives of the captaincy of Sergipe in Brazil: "They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both." (see human placentophagy). In modern Brazil, a black comedy film, How Tasty Was My Little Frenchman, mostly in the Tupi language, portrays a Frenchman captured by the indigenous and his demise. The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology), claims that North American natives practicing cannibalism included "... 
the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters.'" The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior. From another source, according to Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart. As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of "savages". The very first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another |
such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay. Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized. There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium. The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed, and if present in novae have been in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements. Lists of the elements are available by name, atomic number, density, melting point, boiling point and by symbol, as well as ionization energies of the elements. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable.
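To make the scale of such half-lives concrete, the surviving fraction of a radionuclide after a time t follows N/N0 = 2^(-t/T_half). A minimal sketch, using round approximate figures for the bismuth-209 half-life and the age of the universe rather than precise measured values:

```python
# Surviving fraction of a radionuclide after time t: N/N0 = 2 ** (-t / half_life).
half_life_bi209 = 1.9e19   # years, approximate alpha-decay half-life of bismuth-209
age_of_universe = 1.38e10  # years, approximate

remaining = 2 ** (-age_of_universe / half_life_bi209)
print(f"Fraction of primordial Bi-209 still present: {remaining:.12f}")
# Prints roughly 0.999999999497: essentially none has decayed, which is why
# bismuth-209 was long treated as if it were stable.
```

The same arithmetic explains why thorium and uranium, with half-lives comparable to the age of the Solar System, still survive in measurable quantities.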
One of the most convenient, and certainly the most traditional presentation of the elements, is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures). Atomic number The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element. The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element. The symbol for atomic number is Z. Isotopes Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to 12C, 13C, and 14C. Carbon in everyday life and in chemistry is a mixture of 12C (about 98.9%), 13C (about 1.1%) and about 1 atom per trillion of 14C. Most (66 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable. All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements upon radiating an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed "stable" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82. Of the 80 elements with at least one stable isotope, 26 have only one single stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes that occur for a single element is 10 (for tin, element 50). Isotopic mass and atomic mass The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. 
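A short sketch of the proton and neutron bookkeeping just described; the only inputs are the atomic number Z and the mass number A:

```python
# Isotopes of an element share the atomic number Z (protons) but differ in
# neutron count, which is simply N = A - Z for a nuclide with mass number A.
Z_CARBON = 6
for mass_number in (12, 13, 14):
    neutrons = mass_number - Z_CARBON
    print(f"carbon-{mass_number}: {Z_CARBON} protons, {neutrons} neutrons")
# carbon-12: 6 protons, 6 neutrons; carbon-13: 6 and 7; carbon-14: 6 and 8.
```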
Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. 238U). The mass number is always a whole number and has units of "nucleons". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons). Whereas the mass number simply counts the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a single atom is a real number giving the mass of a particular isotope (or "nuclide") of the element, expressed in atomic mass units (symbol: u). In general, the mass number of a given nuclide differs in value slightly from its atomic mass, since the mass of each proton and neutron is not exactly 1 u; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and (finally) because of the nuclear binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 u and that of chlorine-37 is 36.966 u. However, the atomic mass in u of each isotope is quite close to its simple mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is 12C, which by definition has a mass of exactly 12 because u is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state. The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element. Chemically pure and isotopically pure Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope. For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However it is not isotopically pure since ordinary copper consists of two stable isotopes, 69% 63Cu and 31% 65Cu, with different numbers of neutrons. However, a pure gold ingot would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, 197Au. Allotropes Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. 
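As a worked example of the abundance-weighted averaging just described, using commonly tabulated values for the two stable chlorine isotopes (the roughly 76%/24% split quoted above):

```python
# Standard atomic weight as an abundance-weighted mean of isotopic masses (in u).
chlorine_isotopes = [
    (34.969, 0.7577),  # chlorine-35: isotopic mass, approximate natural abundance
    (36.966, 0.2423),  # chlorine-37
]
atomic_weight = sum(mass * abundance for mass, abundance in chlorine_isotopes)
print(f"standard atomic weight of chlorine ≈ {atomic_weight:.3f} u")  # ≈ 35.453 u
```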
For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'. The standard state, also known as the reference state, of an element is defined as its thermodynamically most stable state at a pressure of 1 bar and a given temperature (typically at 298.15K). In thermochemistry, an element is defined to have an enthalpy of formation of zero in its standard state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes. Properties Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins. General properties Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group, (the metalloids), having intermediate properties and often behaving as semiconductors. A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals. States of matter Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively. Melting and boiling points Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. 
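Since the phase at a given temperature follows directly from the melting and boiling points, a small illustrative sketch; the melting points are the figures quoted in this section, while the boiling points are commonly cited approximate values, all at roughly one atmosphere:

```python
def phase_at(t_c: float, melting_c: float, boiling_c: float) -> str:
    """Classify solid/liquid/gas at temperature t_c (°C), assuming ~1 atm."""
    if t_c < melting_c:
        return "solid"
    if t_c < boiling_c:
        return "liquid"
    return "gas"

# (melting point, boiling point) in °C; boiling points are approximate.
elements = {
    "mercury": (-38.8, 356.7),
    "bromine": (-7.2, 58.8),
    "caesium": (28.4, 671.0),
    "gallium": (29.8, 2400.0),
}
for name, (mp, bp) in elements.items():
    print(f"{name}: {phase_at(25.0, mp, bp)} at 25 °C, {phase_at(35.0, mp, bp)} at 35 °C")
# Mercury and bromine are liquid at room temperature; caesium and gallium are
# solid at 25 °C but already liquid at 35 °C.
```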
Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations. Densities The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm³). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements. When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively. Crystal structures The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures. Occurrence and origin on Earth Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of man-made nuclear reactions. Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half-lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements. No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10^35 to 10^189 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all.
Periodic table The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021. Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior. Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering. Nomenclature and symbols The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols. Atomic numbers The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from | Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively. Melting and boiling points Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations. Densities The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm3). Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements. When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. 
For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm³, respectively. Crystal structures The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures. Occurrence and origin on Earth Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of man-made nuclear reactions. Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half-lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements. No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10^35 to 10^189 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all. Periodic table The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021. Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior.
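As a sketch of how an element's period (row) follows from its atomic number alone, using the noble-gas atomic numbers as period boundaries; this is a deliberate simplification that ignores how the lanthanides and actinides are usually drawn apart from the main body of the table:

```python
# Periods end at the noble gases: He(2), Ne(10), Ar(18), Kr(36), Xe(54), Rn(86), Og(118).
NOBLE_GAS_Z = [2, 10, 18, 36, 54, 86, 118]

def period_of(z: int) -> int:
    """Return the periodic-table period (row) containing atomic number z."""
    for period, boundary in enumerate(NOBLE_GAS_Z, start=1):
        if z <= boundary:
            return period
    raise ValueError("atomic number beyond the current 118-element table")

print(period_of(6))    # carbon  -> 2
print(period_of(26))   # iron    -> 4
print(period_of(92))   # uranium -> 7
```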
Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering. Nomenclature and symbols The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols. Atomic numbers The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers. Element names The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements either for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen", while English and some romance languages use "sodium" for "natrium" and "potassium" for "kalium", and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen". For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over the British "sulphur". However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names. According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. 
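Two standard illustrations of the non-monotonicity of atomic weight noted above, with commonly tabulated approximate values (an aside; the naming discussion resumes below):

```python
# Element pairs where the higher atomic number comes with the *lower* atomic weight.
pairs = [
    # (symbol, atomic number, standard atomic weight in u)
    (("Ar", 18, 39.95), ("K", 19, 39.10)),
    (("Te", 52, 127.60), ("I", 53, 126.90)),
]
for (s1, z1, w1), (s2, z2, w2) in pairs:
    assert z2 > z1 and w2 < w1
    print(f"{s2} (Z={z2}, {w2} u) is lighter by atomic weight than {s1} (Z={z1}, {w1} u)")
```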
Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below). In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy). Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950). Chemical symbols Specific chemical elements Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules. The current system of chemical notation was invented by Berzelius. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets. The first of these symbols were intended to be fully universal. Since Latin was the common language of science at that time, they were abbreviations based on the Latin names of metals. Cu comes from cuprum, Fe comes from ferrum, Ag from argentum. The symbols were not followed by a period (full stop) as with abbreviations. Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English. For example, sodium has the chemical symbol 'Na' after the Latin natrium. The same applies to "Fe" (ferrum) for iron, "Hg" (hydrargyrum) for mercury, "Sn" (stannum) for tin, "Au" (aurum) for gold, "Ag" (argentum) for silver, "Pb" (plumbum) for lead, "Cu" (cuprum) for copper, and "Sb" (stibium) for antimony. "W" (wolfram) for tungsten ultimately derives from German, "K" (kalium) for potassium ultimately from Arabic. Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. For example, Germans in the past have used "J" (for the alternate name Jod) for iodine, but now use "I" and "Iod". The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es. General chemical symbols There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. 
These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, although it is also the symbol of yttrium. "Z" is also frequently used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal. At least two additional, two-letter generic chemical symbols are also in informal usage, "Ln" for any lanthanide element and "An" for any actinide element. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol "Rg" has now been assigned to the element roentgenium. Isotope symbols Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example 12C and 235U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used. As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for 1H (protium), D for 2H (deuterium), and T for 3H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D2O instead of 2H2O. Origin of the elements Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy). The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. The light elements lithium, beryllium and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen. During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, 1H) and helium-4 (4He), as well as a smaller amount of deuterium (2H) and very minuscule amounts (on the order of 10−10) of lithium and beryllium. 
Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% 1H, 25% 4He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means. On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (14C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (40Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (40K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes. In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. To date, these experiments have produced all elements up to atomic number 118. Abundance The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance. The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars. Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternatingly larger abundance of elements with even atomic numbers (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovas.
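The "buildup from alpha particles" can be made concrete by listing the alpha ladder itself: each capture adds one helium-4 nucleus (two protons and two neutrons), which is why the even-numbered elements dominate. A simplified sketch of the chain, ending at nickel-56:

```python
# Alpha-process ladder: start from carbon-12 and repeatedly add a helium-4 nucleus.
ELEMENT_BY_Z = {6: "C", 8: "O", 10: "Ne", 12: "Mg", 14: "Si", 16: "S",
                18: "Ar", 20: "Ca", 22: "Ti", 24: "Cr", 26: "Fe", 28: "Ni"}

z, a = 6, 12  # carbon-12
while z <= 28:
    print(f"{ELEMENT_BY_Z[z]}-{a} (Z={z})")
    z, a = z + 2, a + 4  # capture one alpha particle (2 protons, 2 neutrons)
# Ends at Ni-56, whose decay (via Co-56) yields the very abundant Fe-56,
# "ultimately made from 14 helium nuclei" as noted below.
```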
Iron-56 is particularly common, since it is the most stable nuclide that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number. The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminum at 8% by mass is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminum (which occurs there only at 2% of mass), more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space, and loss of iron which has migrated to the Earth's core. The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and the energy transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells. History Evolving definitions The concept of an "element" as an indivisible substance has developed through three major historical phases: classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions. Classical definitions Ancient philosophy posited a set of classical elements.
cent. Spoken and written use of the official form cent in Francophone Canada is exceptionally uncommon. In the Canadian French vernacular sou, sou noir (noir means "black" in French), cenne, and cenne noire are all widely known, used, and accepted monikers when referring to either 1/100 of a Canadian dollar or the 1¢ coin (colloquially known as a "penny" in North American English). Subdivision of euro: cent or centime? In the European community, cent is the official name for one hundredth of a euro. However, in French-speaking countries the word centime is the preferred term. Indeed, the Superior Council of the French language of Belgium recommended in 2001 the use of centime, since cent is also the French word for "hundred". An analogous decision was published in the Journal officiel in France (2 December 1997). In Morocco, dirhams are divided into 100 centimes and one may find prices in the country quoted in centimes rather than in dirhams. Sometimes centimes are known
before this named day in the following year. This may be termed a "year's time", but not a "calendar year". To reconcile the calendar year with the astronomical cycle (which has a fractional number of days) certain years contain extra days ("leap days" or "intercalary days"). The Gregorian year, which is in use in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year, with 8760 hours, 525,600 minutes, or 31,536,000 seconds; but 366 days in a leap year, with 8784 hours, 527,040 minutes, or 31,622,400 seconds. With 97 leap years every 400 years, the year has an average length of 365.2425 days. Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Islamic calendar is a lunar calendar consisting of 12 months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the year in most calendars, but the astronomer's value changes over time, so John Herschel's
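As a quick check of the figures quoted above (97 leap years per 400-year cycle, giving an average of 365.2425 days), the Gregorian leap-year rule can be written out in a few lines. This is a minimal sketch; the helper name is illustrative and not taken from any calendar library.

```python
def is_gregorian_leap(year: int) -> bool:
    # Leap if divisible by 4, except century years, which must also be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Any full 400-year Gregorian cycle contains 97 leap years.
leap_years = sum(is_gregorian_leap(y) for y in range(2000, 2400))
print(leap_years)                               # 97
print((303 * 365 + 97 * 366) / 400)             # 365.2425, the average year length
print(365 * 24 * 60 * 60, 366 * 24 * 60 * 60)   # 31536000 31622400 seconds per ordinary/leap year
```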
The CFA franc (Franc of the Financial Community of Africa, originally Franc of the French Colonies in Africa) is the name of two currencies, the West African CFA franc, used in eight West African countries, and the Central African CFA franc, used in six Central African countries. Although separate, the two CFA franc currencies have always been at parity and are effectively interchangeable. The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. On 22 December 2019, it was announced that the West African currency would be replaced by an independent currency to be called Eco. Both CFA francs have a fixed exchange rate to the euro: 100 CFA francs = 1 former French franc = 0.152449 euro; or 1 € = 6.55957 FRF = 655.957 CFA francs exactly. Usage CFA francs are used in fourteen countries: twelve nations formerly ruled by France in West and Central Africa (excluding Guinea and Mauritania, which withdrew), plus Guinea-Bissau (a former Portuguese colony), and Equatorial Guinea (a former Spanish colony). These fourteen countries have a combined population of 147.5 million people (as of 2013), and a combined GDP of US$166.6 billion (as of 2012). The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. Evaluation The currency has been criticized for making economic planning for the developing countries of French West Africa all but impossible since the CFA's value is pegged to the euro (whose monetary policy is set by the European Central Bank). Others disagree and argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries". The European Union's own assessment of the CFA's link to the euro, carried out in 2008, noted that "benefits from economic integration within each of the two monetary unions of the CFA franc zone, and even more so between them, remained remarkably low" but that "the peg to the French franc and, since 1999, to the euro as exchange rate anchor is usually found to have had favourable effects in the region in terms of macroeconomic stability". Name Between 1945 and 1958, CFA stood for Colonies françaises d'Afrique ("French colonies of Africa"); then for Communauté française d'Afrique ("French Community of Africa") between 1958 (establishment of the French Fifth Republic) and the independence of these African countries at the beginning of the 1960s. Since independence, CFA is taken to mean Communauté financière africaine (African Financial Community), but in actual use, the term can have two meanings (see Institutions below). History Creation The CFA franc was created on 26 December 1945, along with the CFP franc. The reason for their creation was the weakness of the French franc immediately after World War II. When France ratified the Bretton Woods Agreement in December 1945, the French franc was devalued in order to set a fixed exchange rate with the US dollar. New currencies were created in the French colonies to spare them the strong devaluation, thereby facilitating imports from France. French officials presented the decision as an act of generosity; René Pleven, the French Minister of Finance, was among those quoted making this case. Exchange rate The CFA franc was created with a fixed exchange rate versus the French franc. This exchange rate was changed only twice: in 1948 and in 1994. Exchange rate: 26 December 1945 to 16 October 1948 – 1 CFA franc = 1.70 FRF (FRF = French franc).
This 0.70 FRF premium is the consequence of the creation of the CFA franc, which spared the French African colonies the devaluation of December 1945 (before December 1945, 1 local franc in these colonies was worth 1 French franc). 17 October 1948 to 31 December 1959 – 1 CFA franc = 2.00 FRF (the CFA franc had followed the French franc's devaluation versus the US dollar in January 1948, but on 18 October 1948, the French franc devalued again and this time the CFA franc was revalued against the French franc to offset almost all of this new devaluation of the French franc; after October 1948, the CFA was never revalued again versus the French franc and followed all the successive devaluations of the French franc) 1 January 1960 to 11 January 1994 – 1 CFA franc = 0.02 FRF (1 January 1960: the French franc redenominated, with 100 "old" francs becoming 1 "new" franc) 12 January 1994 to 31 December 1998 – 1 CFA franc = 0.01 FRF (sharp devaluation of the CFA franc to help African exports) 1 January 1999 onwards – 100 CFA francs = 0.152449 euro or 1 euro = 655.957 CFA francs. (1 January 1999: euro replaced FRF at the rate of 6.55957 FRF for 1 euro) The 1960 and 1999 events were merely changes in the currency in use in France: the relative value of the CFA franc versus the French franc/euro changed only in 1948 and 1994. The value of the CFA franc has been widely criticized as being too high, which many economists believe favours the urban elite of the African countries, who can buy imported manufactured goods cheaply at the expense of farmers who cannot easily export agricultural products. The devaluation of 1994 was an attempt to reduce these imbalances. Changes in countries using the franc Over time, the number of countries and territories using the CFA franc has changed as some countries began introducing their own separate currencies. Two nations have also chosen to adopt the CFA franc since its introduction, despite the fact that they were never French colonies. 1960: Guinea leaves and begins issuing Guinean francs. 1962: Mali leaves and begins issuing Malian francs. 1975: Réunion leaves for the French franc, which changed later to the Euro. 1976: Mayotte leaves for the French franc, which changed later to the Euro. 1984: Mali rejoins (1 CFA franc = 2 Malian francs).
1985: Equatorial Guinea joins (1 franc = 4 bipkwele) 1997: Guinea-Bissau joins (1 franc = 65 pesos) European Monetary Union In 1998, in anticipation of Economic and Monetary Union of the European Union, the Council of the European Union addressed the monetary agreements France had with the CFA Zone and Comoros and ruled that: The agreements are unlikely to have any material effect on the monetary and exchange rate policy of the Eurozone. In their present forms and states of implementation, the agreements are unlikely to present any obstacle to a smooth functioning of economic and monetary union. Nothing in the agreements can be construed as implying an obligation for the European Central Bank (ECB) or any national central bank to support the convertibility of the CFA and Comorian francs. Modifications to the existing agreements will not lead to any obligations for the European Central Bank or any national central bank. The French Treasury will guarantee the free convertibility at a fixed parity between the euro and the CFA and Comorian francs. The competent French authorities shall keep the European Commission, the European Central Bank and the Economic and Financial Committee informed about the implementation of the agreements and inform the Committee prior to changes of the parity between the euro and the CFA and Comorian francs. Any change to the nature or scope of the agreements would require Council approval on the basis of a Commission recommendation and ECB consultation. Criticism and replacement in West Africa Critics point out that the currency is controlled by the French treasury, and in turn African countries channel more money to France than they receive in aid and have no sovereignty over their monetary policies. In January 2019, Italian government leaders criticized France for impoverishing Africa through the CFA franc. France responded by summoning Italy's ambassador in Paris. However, criticism of the CFA franc, coming from various African organizations, continued. On 21 December 2019, President Alassane Ouattara of the Ivory Coast and President Emmanuel Macron of France announced an initiative to replace the West African CFA Franc with the Eco. Subsequently, a reform of the West African CFA franc was initiated. In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc. The countries using the currency will no longer have to deposit half of their foreign exchange reserves with the French Treasury. The West African CFA franc is expected to be renamed the "Eco" in the near future. Institutions There are two different currencies called the CFA franc: the West African CFA franc (ISO 4217 currency code XOF), and the Central African CFA franc (ISO 4217 currency code XAF). They are distinguished in French by the meaning of the abbreviation CFA. These two CFA francs have the same exchange rate with the euro (1 euro = 655.957 XOF = 655.957 XAF), and they are both guaranteed by the French treasury (Trésor public), but the West African CFA franc cannot be used in Central African countries, and the Central African CFA franc cannot be used in West African countries. West African The West African CFA franc (XOF) is known in French as the franc CFA, where CFA stands for Communauté financière d'Afrique ("Financial Community of Africa") or Communauté financière africaine ("African Financial Community"). It is issued by the BCEAO (Banque Centrale des États de l'Afrique de l'Ouest, i.e., "Central Bank of the West African States"), located in Dakar, Senegal, for the eight countries of the UEMOA
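As a small illustration of the fixed parity quoted above (1 euro = 655.957 CFA francs, identical for XOF and XAF), the following sketch converts between euros and CFA francs. The function names are illustrative only and not part of any financial library.

```python
CFA_PER_EURO = 655.957  # fixed parity, the same for the West and Central African CFA francs

def euros_to_cfa(euros: float) -> float:
    """Convert an amount in euros to CFA francs at the fixed parity."""
    return euros * CFA_PER_EURO

def cfa_to_euros(cfa: float) -> float:
    """Convert an amount in CFA francs to euros at the fixed parity."""
    return cfa / CFA_PER_EURO

print(euros_to_cfa(1))               # 655.957
print(round(cfa_to_euros(100), 6))   # 0.152449, matching "100 CFA francs = 0.152449 euro"
```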
Few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes' rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought. Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness. A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in proteins. At the present time, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.
Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum. Problem of other minds Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at Indiana University) regarding the literature and research studying artificial intelligence in androids. The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in an essay titled The Unimagined Preposterousness of Zombies, argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences. Animal consciousness The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed. 
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence. On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society." "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors." Artifact consciousness The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote: One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. 
The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars have argued that with technological growth once machines begin to display any substantial signs of human-like behavior then the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail even as observed in its nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is simply the result of compression. As an agent sees representation of itself recurring in the environment, the compression of this representation can be called consciousness. In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due simply to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped. In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. 
But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition. In 2014, Victor Argonov has suggested a non-Turing test for machine consciousness based on machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute the existence of consciousness. A positive result proves that machine is conscious but a negative result proves nothing. For example, absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness. Scientific study For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies. Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it. 
Measurement Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation). Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness. Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains. Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. 
The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, killer whales, pigeons, European magpies and elephants have all been observed to pass this test. Neural correlates A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies. Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations. A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with brain's internal model of the visual world. 
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia. In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X. In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states. Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologues can be identified? The general conclusion from the study by Butler, et al., is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. 
Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologues have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homologue/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists. Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans. Biological function and evolution Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in the biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops in the book The Self and Its Brain a similar evolutionary argument. Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example has been proposed by Gerald Edelman called dynamic core hypothesis which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. 
Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of E. Morsella. As noted earlier, even among writers who consider consciousness to be a well-defined thing, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality.
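The perturbational complexity index discussed in the Neural correlates section above is, at its core, a normalized Lempel-Ziv complexity computed over a binarized spatiotemporal response to cortical perturbation. The sketch below is only a toy version of that idea, assuming a pre-binarized (channels x time) response matrix; the actual PCI pipeline (TMS-evoked potentials, source estimation, statistical thresholding, and the published normalization) is omitted, and the function names are illustrative only.

```python
import numpy as np

def lempel_ziv_complexity(sequence: str) -> int:
    """Count the number of distinct phrases in the LZ76 parsing of a string."""
    n = len(sequence)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if sequence[i + k - 1] != sequence[l + k - 1]:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:              # a new phrase has been found
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
        else:
            k += 1
            if l + k > n:
                c += 1
                break
    return c

def pci_like(binary_response: np.ndarray) -> float:
    """Toy PCI-style score: normalized Lempel-Ziv complexity of a binarized
    (channels x time) response matrix. Not the published PCI formula."""
    flat = "".join(binary_response.astype(int).astype(str).ravel())
    n = len(flat)
    return lempel_ziv_complexity(flat) * np.log2(n) / n

# Hypothetical example: a random (high-complexity) response versus a flat (low-complexity) one.
rng = np.random.default_rng(0)
print(pci_like(rng.integers(0, 2, size=(8, 64))))   # close to 1
print(pci_like(np.zeros((8, 64))))                   # near 0
```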
Likewise, the theory of Eccles seems incompatible, since a structural homologue/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists. Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans. Biological function and evolution Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? by stating an evolutionary argument for mind-brain interaction implying that if the preservation and development of consciousness in the biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes, but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops in the book The Self and Its Brain a similar evolutionary argument. Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example has been proposed by Gerald Edelman called dynamic core hypothesis which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. 
They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article of E. Morsella. As noted earlier, even among writers who consider consciousness to be a well-defined thing, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. 
As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above). Another idea suggested where consciousness originates from a cell that has nestled itself in a blood capillary in the brain where the blood flow determines whether or not one is conscious. States of consciousness There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alternations in body image and changes in meaning or significance. The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed. Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention. A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. 
LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role. There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness. The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness, among them experience of unity, spiritual experience, blissful state, and insightfulness. 
the world followed Gresham's law: keeping the gold and silver they received but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971, an action which was known as the Nixon shock. No country has an enforceable gold standard or silver standard currency system. Banknote era A banknote (more commonly known as a bill in the United States and Canada) is a type of currency and is commonly used as legal tender in many jurisdictions. Together with coins, banknotes make up the cash form of all money. Banknotes are mostly paper, but Australia's Commonwealth Scientific and Industrial Research Organisation developed a polymer currency in the 1980s; it went into circulation on the nation's bicentenary in 1988. Polymer banknotes had already been introduced in the Isle of Man in 1983. As of 2016, polymer currency is used in over 20 countries (over 40 if counting commemorative issues), and dramatically increases the life span of banknotes and reduces counterfeiting. Modern currencies The currency used is based on the concept of lex monetae; that a sovereign state decides which currency it shall use. The International Organization for Standardization has introduced a system of three-letter codes (ISO 4217) to denote currency (as opposed to simple names or currency signs), in order to remove the confusion arising because there are dozens of currencies called the dollar and several called the franc. Even the "pound" is used in nearly a dozen different countries; most of these are tied to the pound sterling, while the remainder has varying values. In general, the three-letter code uses the ISO 3166-1 country code for the first two letters and the first letter of the name of the currency (D for dollar, for example) as the third letter. United States currency, for instance, is globally referred to as USD. Currencies such as the pound sterling have different codes, as the first two letters denote not the exact country name but an alternative name also used to describe the country. The pound's code is GBP where GB denotes Great Britain instead of the United Kingdom. The former currencies include the marks that were in circulation in Germany and Finland. The International Monetary Fund uses a different system when referring to national currencies. Alternative currencies Distinct from centrally controlled government-issued currencies, private decentralized trust-less networks support alternative currencies such as Bitcoin, Ethereum, Litecoin, Monero, Peercoin or Dogecoin, which are classified as cryptocurrency since the transfer of value is assured through cryptographic signatures validated by all users. There are also branded currencies, for example 'obligation' based stores of value, such as quasi-regulated BarterCard, Loyalty Points (Credit Cards, Airlines) or Game-Credits (MMO games) that are based on reputation of commercial products, or highly regulated 'asset-backed' 'alternative currencies' such as mobile-money schemes like MPESA (called E-Money Issuance). The currency may be Internet-based and digital, for instance, bitcoin is not tied to any specific country, or the IMF's SDR that is based on a basket of currencies (and assets held). 
Possession and sale of alternative forms of currencies are often outlawed by governments in order to preserve the legitimacy of the constitutional currency for the benefit of all citizens. For example, Article I, section 8, clause 5 of the United States Constitution delegates to Congress the power to coin money and to regulate the value thereof. This power was delegated to Congress in order to establish and preserve a uniform standard of value and to insure a singular monetary system for all purchases and debts in the United States, public and private. Along with the power to coin money, the United States Congress has the concurrent power to restrain the circulation of money which is not issued under its own authority in order to protect and preserve the constitutional currency. It is a violation of federal law for individuals or organizations to create private coin or currency systems to compete with the official coinage and currency of the United States. Control and production In most cases, a central bank has the exclusive power to issue all forms of currency, including coins and banknotes (fiat money), and to restrain the circulation of alternative currencies within its own area of circulation (a country or group of countries); it regulates the production of currency by banks (credit) through monetary policy. An exchange rate is a price at which two currencies can be exchanged against each other. This is used for trade between the two currency zones. Exchange rates can be classified as either floating or fixed. In the former, day-to-day movements in exchange rates are determined by the market; in the latter, governments intervene in the market to buy or sell their currency to balance supply and demand at a static exchange rate. In cases where a country has control of its own currency, that control is exercised either by a central bank or by a Ministry of Finance. The institution that has control of monetary policy is referred to as the monetary authority. Monetary authorities have varying degrees of autonomy from the governments that create them. A monetary authority is created and supported by its sponsoring government, so independence can be reduced by the legislative or executive authority that creates it. Several countries can use the same name for their own separate currencies (for example, a dollar in Australia, Canada, and the United States). By contrast, several countries can also use the same currency (for example, the euro or the CFA franc), or one country can declare the currency of another country to be legal tender. For example, Panama and El Salvador have declared US currency to be legal tender, and from 1791 to 1857, Spanish dollars were legal tender in the United States. At various times countries have either re-stamped foreign coins or used currency boards, issuing one note of currency for each note of a foreign government held, as Ecuador currently does. Each currency typically has a main currency unit (the dollar, for example, or the euro) and a fractional unit, often defined as 1/100 of the main unit: 100 cents = 1 dollar, 100 centimes = 1 franc, 100 pence = 1 pound, although units of 1/10 or 1/1000 also occur occasionally. Some currencies do not have any smaller units at all, such as the Icelandic króna and the Japanese yen. 
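Because of this main-unit/minor-unit structure, financial software typically stores amounts as integer counts of the minor unit rather than as fractional main units. The following is a minimal sketch; the currency table and helper names are invented for the example, and the minor-unit scales shown are the commonly used ones rather than an authoritative list.

```python
from decimal import Decimal

# Illustrative subset only: minor units per main unit (ISO 4217 publishes the official exponents).
MINOR_UNITS_PER_MAIN = {
    "USD": 100,  # 100 cents = 1 dollar
    "GBP": 100,  # 100 pence = 1 pound
    "JPY": 1,    # the yen has no smaller unit in practical use
    "ISK": 1,    # likewise the Icelandic krona
}

def to_minor_units(amount: str, currency: str) -> int:
    """Convert a human-readable amount such as '12.34' into whole minor units."""
    return int(Decimal(amount) * MINOR_UNITS_PER_MAIN[currency])

def to_main_units(minor: int, currency: str) -> Decimal:
    """Convert a count of minor units back into the main unit."""
    return Decimal(minor) / MINOR_UNITS_PER_MAIN[currency]

print(to_minor_units("12.34", "USD"))  # 1234 (cents)
print(to_main_units(500, "JPY"))       # 500 (yen, no fractional unit)
```

Storing whole minor units in this way avoids the rounding errors that binary floating-point arithmetic would otherwise introduce.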
Mauritania and Madagascar are the only remaining countries that have theoretical fractional units not based on the decimal system; instead, the Mauritanian ouguiya is in theory divided into 5 khoums, while the Malagasy ariary is theoretically divided into 5 iraimbilanja. In these countries, words like dollar or pound "were simply names for given weights of gold". Due to inflation, khoums and iraimbilanja have in practice fallen into disuse. (See non-decimal currencies for other historic currencies with non-decimal divisions.) Currency convertibility Subject to variation around the world, local currency can be converted to another currency or vice versa with or without central bank/government intervention. Such conversions take place in the foreign exchange market. Based on such restrictions, or on how freely and readily they can be converted, currencies are classified as: Fully convertible When there are no restrictions or limitations on the amount of currency that can be traded on the international market, and the government does not artificially impose a fixed value or minimum value on the currency. Other definitions of the term "currency" appear in the respective synonymous articles: banknote, coin, and money. This article uses the definition which focuses on the currency systems of countries. One can classify currencies into three monetary systems: fiat money, commodity money, and representative money, depending on what guarantees a currency's value (the economy at large vs. the government's physical metal reserves). Some currencies function as legal tender in certain political jurisdictions. Others simply get traded for their economic value. Digital currency has arisen with the popularity of computers and the Internet. Whether digital notes and coins will be successfully developed remains dubious. Decentralized digital currencies, such as cryptocurrencies, are not legal currency, strictly speaking, since they are not issued by a government monetary authority (although one of them, Bitcoin, has become legal tender in El Salvador). Many warnings issued by various countries note the opportunities that cryptocurrencies create for illegal activities, such as money laundering and terrorism. In 2014 the United States IRS issued a statement explaining that virtual currency is treated as property for Federal income-tax purposes and providing examples of how longstanding tax principles applicable to transactions involving property apply to virtual currency. History Early currency Originally money was a form of receipt, representing grain stored in temple granaries in Sumer in ancient Mesopotamia and in Ancient Egypt. In this first stage of currency, metals were used as symbols to represent value stored in the form of commodities. This formed the basis of trade in the Fertile Crescent for over 1500 years. However, the collapse of the Near Eastern trading system pointed to a flaw: in an era where there was no place that was safe to store value, the value of a circulating medium could only be as sound as the forces that defended that store. A trade could only reach as far as the credibility of that military. By the late Bronze Age, however, a series of treaties had established safe passage for merchants around the Eastern Mediterranean, spreading from Minoan Crete and Mycenae in the northwest to Elam and Bahrain in the southeast. 
It is not known what was used as a currency for these exchanges, but it is thought that ox-hide shaped ingots of copper, produced in Cyprus, may have functioned as a currency. It is thought that the increase in piracy and raiding associated with the Bronze Age collapse, possibly produced by the Peoples of the Sea, brought the trading system of oxhide ingots to an end. It was only the recovery of Phoenician trade in the 10th and 9th centuries BC that led to a return to prosperity, and the appearance of real coinage, possibly first in Anatolia with Croesus of Lydia and subsequently with the Greeks and Persians. In Africa, many forms of value store have been used, including beads, ingots, ivory, various forms of weapons, livestock, the manilla currency, and ochre and other earth oxides. The manilla rings of West Africa were one of the currencies used from the 15th century onwards to sell slaves. African currency is still notable for its variety, and in many places, various forms of barter still apply. Coinage The prevalance of metal coins possibly led to the metal itself being the store of value: first copper, then both silver and gold, and at one point also bronze. Now other non-precious metals are used for coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual accepting the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but the existence of standard coins also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine weight of the metal, and thus the value of a coin could be determined, even if it had been shaved, debased or otherwise tampered with (see Numismatics). Most major economies using coinage had several tiers of coins of different values, made of copper, silver, and gold. Gold coins were the most valuable and were used for large purchases, payment of the military, and backing of state activities. Units of account were often defined as the value of a particular type of gold coin. Silver coins were used for midsized transactions, and sometimes also defined a unit of account, while coins of copper or silver, or some mixture of them (see debasement), might be used for everyday transactions. This system had been used in ancient India since the time of the Mahajanapadas. The exact ratios between the values of the three metals varied greatly between different eras and places; for example, the opening of silver mines in the Harz mountains of central Europe made silver relatively less valuable, as did the flood of New World silver after the Spanish conquests. However, the rarity of gold consistently made it more valuable than silver, and likewise silver was consistently worth more than copper. Paper money In premodern China, the need for credit and for a medium of exchange that was less physically cumbersome than large numbers of copper coins led to the introduction of paper money, i.e. banknotes. Their introduction was a gradual process that lasted from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes by wholesalers' shops. These notes were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began to circulate these notes amongst the traders in its monopolized salt industry. 
The Song government granted several shops the right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. Yet the banknotes issued were still only locally and temporarily valid: it was not until the mid-13th century that a standard and uniform government issue of paper money became an acceptable nationwide currency. The already widespread methods of woodblock printing and then Bi Sheng's movable type printing by the 11th century were the impetus for the mass production of paper money. 
A further concern of monetary policy is unemployment; Keynes defined involuntary unemployment as follows: "Men are involuntarily unemployed if, in the event of a small rise in the price of wage-goods relatively to the money-wage, both the aggregate supply of labour willing to work for the current money-wage and the aggregate demand for it at that wage would be greater than the existing volume of employment."— John Maynard Keynes, The General Theory of Employment, Interest and Money p1 Economic growth Economic growth can be enhanced by investment in capital, such as more or better machinery. A low interest rate implies that firms can borrow money to invest in their capital stock and pay less interest for it. Lowering the interest rate is therefore considered to encourage economic growth and is often used to alleviate times of low economic growth. On the other hand, raising the interest rate is often used in times of high economic growth as a counter-cyclical device to keep the economy from overheating and avoid market bubbles. Further goals of monetary policy are stability of interest rates, of the financial market, and of the foreign exchange market. Goals frequently cannot be separated from each other and often conflict. Costs must therefore be carefully weighed before policy implementation. Climate change In the aftermath of the Paris agreement on climate change, a debate is now underway on whether central banks should also pursue environmental goals as part of their activities. In 2017, eight central banks formed the Network for Greening the Financial System (NGFS) to evaluate the way in which central banks can use their regulatory and monetary policy tools to support climate change mitigation. Today more than 70 central banks are part of the NGFS. In January 2020, the European Central Bank announced that it would take climate considerations into account when reviewing its monetary policy framework. Proponents of "green monetary policy" propose that central banks include climate-related criteria in their collateral eligibility frameworks, in their asset purchases and also in their refinancing operations. Critics such as Jens Weidmann, however, argue that it is not the central banks' role to conduct climate policy. Monetary policy instruments The primary tools available to central banks are open market operations (including repurchase agreements), reserve requirements, interest rate policy (through control of the discount rate), and control of the money supply. A central bank affects the monetary base through open market operations, if its country has a well-developed market for its government bonds. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, repurchase agreements or "repos", company bonds, or foreign currencies, in exchange for money on deposit at the central bank. Those deposits are convertible to currency, so all of these purchases or sales result in more or less base currency entering or leaving market circulation. For example, if the central bank wishes to decrease interest rates (executing expansionary monetary policy), it purchases government debt, thereby increasing the amount of cash in circulation or crediting banks' reserve accounts. Commercial banks then have more money to lend, so they reduce lending rates, making loans less expensive. Cheaper credit card interest rates increase consumer spending. Additionally, when business loans are more affordable, companies can expand to keep up with consumer demand. They ultimately hire more workers, whose incomes increase, which in turn also increases demand. This method is usually enough to stimulate demand and drive economic growth to a healthy rate. 
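The balance-sheet mechanics behind the paragraph above can be made concrete with a toy example. The sketch below is a deliberate simplification under assumed names and figures (one central bank, one commercial bank, amounts in arbitrary units); it is not a model of any actual institution, it only shows that an open market purchase pays for bonds by crediting the seller's reserve account, so base money grows by exactly the purchase amount.

```python
# Toy balance sheets; all names and figures are hypothetical.
central_bank = {"government_bonds": 100, "bank_reserves_owed": 100}  # assets / liabilities
commercial_bank = {"government_bonds": 50, "reserves_at_cb": 20, "loans": 200}

def open_market_purchase(amount: int) -> None:
    """Central bank buys `amount` of bonds from the commercial bank.

    Payment is made by crediting the commercial bank's reserve account,
    so the monetary base rises one-for-one with the purchase.
    """
    commercial_bank["government_bonds"] -= amount
    commercial_bank["reserves_at_cb"] += amount
    central_bank["government_bonds"] += amount
    central_bank["bank_reserves_owed"] += amount

open_market_purchase(10)
print(commercial_bank["reserves_at_cb"])  # 30: extra liquidity the bank can lend out
print(central_bank["government_bonds"])   # 110
```

An open market sale is simply the reverse bookkeeping entry: reserves are drained and short-term interest rates tend to rise.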
Usually, the short-term goal of open market operations is to achieve a specific short-term interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight; however, the monetary policy of China (since 2014) is to target the exchange rate between the Chinese renminbi and a basket of foreign currencies. If the open market operations do not lead to the desired effects, a second tool can be used: the central bank can increase or decrease the interest rate it charges on discounts or overdrafts (loans from the central bank to commercial banks, see discount window). If the interest rate on such transactions is sufficiently low, commercial banks can borrow from the central bank to meet reserve requirements and use the additional liquidity to expand their balance sheets, increasing the credit available to the economy. A third alternative is to change the reserve requirements. The reserve requirement refers to the proportion of total liabilities that banks must keep on hand overnight, either in its vaults or at the central bank. Banks only maintain a small portion of their assets as cash available for immediate withdrawal; the rest is invested in illiquid assets like mortgages and loans. Lowering the reserve requirement frees up funds for banks to increase loans or buy other profitable assets. This is expansionary because it creates credit. However, even though this tool immediately increases liquidity, central banks rarely change the reserve requirement because doing so frequently adds uncertainty to banks' planning. The use of open market operations is therefore preferred. Unconventional monetary policy Other forms of monetary policy, particularly used when interest rates are at or near 0% and there are concerns about deflation or deflation is occurring, are referred to as unconventional monetary policy. These include credit easing, quantitative easing, forward guidance, and signalling. In credit easing, a central bank purchases private sector assets to improve liquidity and improve access to credit. Signaling can be used to lower market expectations for lower interest rates in the future. For example, during the credit crisis of 2008, the US Federal Reserve indicated rates would be low for an "extended period", and the Bank of Canada made a "conditional commitment" to keep rates at the lower bound of 25 basis points (0.25%) until the end of the second quarter of 2010. Some have envisaged the use of what Milton Friedman once called "helicopter money" whereby the central bank would make direct transfers to citizens in order to lift inflation up to the central bank's intended target. Such policy option could be particularly effective at the zero lower bound. Banking supervision and other activities In some countries a central bank, through its subsidiaries, controls and monitors the banking sector. In other countries banking supervision is carried out by a government department such as the UK Treasury, or by an independent government agency, for example, UK's Financial Conduct Authority. It examines the banks' balance sheets and behaviour and policies toward consumers. Apart from refinancing, it also provides banks with services such as transfer of funds, bank notes and coins or foreign currency. 
Thus it is often described as the "bank of banks". Many countries will monitor and control the banking sector through several different agencies and for different purposes. Bank regulation in the United States, for example, is highly fragmented, with three federal agencies (the Federal Deposit Insurance Corporation, the Federal Reserve Board, and the Office of the Comptroller of the Currency) and numerous others at the state and private level. There is usually significant cooperation between the agencies. For example, money center banks, deposit-taking institutions, and other types of financial institutions may be subject to different (and occasionally overlapping) regulation. Some types of banking regulation may be delegated to other levels of government, such as state or provincial governments. Any cartel of banks is particularly closely watched and controlled. Most countries control bank mergers and are wary of concentration in this industry due to the danger of groupthink and runaway lending bubbles based on a single point of failure, the credit culture of the few large banks. Independence Numerous governments have opted to make central banks independent. The economic logic behind central bank independence is that when governments delegate monetary policy to an independent central bank (with an anti-inflationary purpose) and away from elected politicians, monetary policy will not reflect the interests of the politicians. When governments control monetary policy, politicians may be tempted to boost economic activity in advance of an election to the detriment of the long-term health of the economy and the country. As a consequence, financial markets may not consider future commitments to low inflation to be credible when monetary policy is in the hands of elected officials, which increases the risk of capital flight. An alternative to central bank independence is to have fixed exchange rate regimes. Governments generally have some degree of influence over even "independent" central banks; the aim of independence is primarily to prevent short-term interference. In 1951, the Deutsche Bundesbank became the first central bank to be given full independence, leading this form of central bank to be referred to as the "Bundesbank model", as opposed, for instance, to the New Zealand model, which has a goal (i.e. inflation target) set by the government. Central bank independence is usually guaranteed by legislation and the institutional framework governing the bank's relationship with elected officials, particularly the minister of finance. Central bank legislation will enshrine specific procedures for selecting and appointing the head of the central bank. Often the minister of finance will appoint the governor in consultation with the central bank's board and its incumbent governor. In addition, the legislation will specify the bank governor's term of appointment. The most independent central banks enjoy a fixed non-renewable term for the governor in order to eliminate pressure on the governor to please the government in the hope of being re-appointed for a second term. Generally, independent central banks enjoy both goal and instrument independence. In return for their independence, central banks are usually accountable at some level to government officials, either to the finance ministry or to parliament. For example, the members of the Board of Governors of the U.S. Federal Reserve are nominated by the U.S. 
President and confirmed by the Senate, the Federal Reserve publishes verbatim transcripts, and its balance sheets are audited by the Government Accountability Office. In the 1990s there was a trend towards increasing the independence of central banks as a way of improving long-term economic performance. While a large volume of economic research has been done to define the relationship between central bank independence and economic performance, the results are ambiguous. The literature on central bank independence has defined a cumulative and complementary number of aspects: Institutional independence: The independence of the central bank is enshrined in law and shields the central bank from political interference. In general terms, institutional independence means that politicians should refrain from seeking to influence monetary policy decisions, while symmetrically central banks should also avoid influencing government politics. Goal independence: The central bank has the right to set its own policy goals, whether inflation targeting, control of the money supply, or maintaining a fixed exchange rate. While this type of independence is more common, many central banks prefer to announce their policy goals in partnership with the appropriate government departments. This increases the transparency of the policy-setting process and thereby increases the credibility of the goals chosen by providing assurance that they will not be changed without notice. In addition, the setting of common goals by the central bank and the government helps to avoid situations where monetary and fiscal policy are in conflict, a policy combination that is clearly sub-optimal. Functional & operational independence: The central bank has the independence to determine the best way of achieving its policy goals, including the types of instruments used and the timing of their use. To achieve its mandate, the central bank has the authority to run its own operations (appointing staff, setting budgets, and so on) and to organize its internal structures without excessive involvement of the government. This is the most common form of central bank independence. The granting of independence to the Bank of England in 1997 was, in fact, the granting of operational independence; the inflation target continued to be announced in the Chancellor's annual budget speech to Parliament. Personal independence: The other forms of independence are not possible unless central bank heads have a high security of tenure. In practice, this means that governors should hold long mandates (at least longer than the electoral cycle) and a certain degree of legal immunity. One of the most common statistical indicators used in the literature as a proxy for central bank independence is the "turn-over-rate" of central bank governors (a toy calculation of this rate is sketched after this list). If a government is in the habit of appointing and replacing the governor frequently, it clearly has the capacity to micro-manage the central bank through its choice of governors. Financial independence: central banks have full autonomy over their budget, and some are even prohibited from financing governments. This is meant to remove incentives from politicians to influence central banks. Legal independence: some central banks have their own legal personality, which allows them to ratify international agreements without the government's approval (like the ECB) and to go to court. 
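The turnover-rate proxy mentioned under personal independence is simply the number of governor changes divided by the number of years observed; lower values are read as a sign of more secure tenure. The sketch below uses invented appointment dates purely for illustration.

```python
# Hypothetical appointment years of successive governors within an observation window.
appointment_years = [1995, 1999, 2003, 2011, 2019]
window_start, window_end = 1995, 2020

def governor_turnover_rate(appointments, start, end):
    """Turnover rate = governor changes per year of observation.

    The first appointment in the window is the incumbent, so only
    later appointments count as changes.
    """
    changes = len([year for year in appointments if start < year <= end])
    return changes / (end - start)

print(round(governor_turnover_rate(appointment_years, window_start, window_end), 2))  # 0.16
```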
There is very strong consensus among economists that an independent central bank can run a more credible monetary policy, making market expectations more responsive to signals from the central bank. Both the Bank of England (1997) and the European Central Bank have been made independent and follow a set of published inflation targets so that markets know what to expect. Even the People's Bank of China has been accorded great latitude, though in China the official role of the bank remains that of a national bank rather than a central bank, underlined by the official refusal to "unpeg" the yuan or to revalue it "under pressure". The fact that the Communist Party is not elected also relieves the pressure to please people, increasing its independence. International organizations such as the World Bank, the Bank for International Settlements (BIS) and the International Monetary Fund (IMF) strongly support central bank independence. This results, in part, from a belief in the intrinsic merits of increased independence. The support for independence from the international organizations also derives partly from the connection between increased independence for the central bank and increased transparency in the policy-making process. The IMF's Financial Sector Assessment Program (FSAP) review self-assessment, for example, includes a number of questions about central bank independence in the transparency section. An independent central bank will score higher in the review than one that is not independent. History Early history The use of money as a unit of account predates recorded history. Government control of money is documented in the ancient Egyptian economy (2750–2150 BCE). The Egyptians measured the value of goods with a central unit called the shat. Like many other currencies, the shat was linked to gold. The value of a shat in terms of goods was defined by government administrations. Other cultures in Asia Minor later materialized their currencies in the form of gold and silver coins. In the medieval and the early modern period a network of professional banks was established in Southern and Central Europe. The institutes built a new tier in the financial economy. The monetary system was still controlled by government institutions, mainly through the coinage prerogative. Banks, however, could use book money to create deposits for their customers. Thus, they had the possibility to issue, lend and transfer money autonomously without direct governmental control. In order to consolidate the monetary system, a network of public exchange banks was established at the beginning of the 17th century in the main European trade centres. The Amsterdam Wisselbank was founded as a first institute in 1609. Further exchange banks were located in Hamburg, Venice and Nuremberg. The institutes offered a public infrastructure for cashless international payments. They aimed to increase the efficiency of international trade and to safeguard monetary stability. The exchange banks thus fulfilled comparable functions to modern central banks. The institutes even issued their own (book) currency, called Mark Banco. 
The Bank of Amsterdam established in 1609 is considered to be the precursor to modern central banks. The central bank of Sweden ("Sveriges Riksbank" or simply "Riksbanken") was founded in Stockholm from the remains of the failed bank Stockholms Banco in 1664 and answered to the parliament ("Riksdag of the Estates"). One role of the Swedish central bank was lending money to the government. Bank of England The establishment of the Bank of England, the model on which most modern central banks have been based, was devised by Charles Montagu, 1st Earl of Halifax, in 1694, following a proposal by the banker William Paterson three years earlier, which had not been acted upon. 
In the Kingdom of England in the 1690s, public funds were in short supply, and the credit of William III's government was so low in London that it was impossible for it to borrow the £1,200,000 (at 8 percent) needed to finance the ongoing Nine Years' War with France. In order to induce subscription to the loan, Montagu proposed that the subscribers were to be incorporated as The Governor and Company of the Bank of England with long-term banking privileges including the issue of notes. The lenders would give the government cash (bullion) and also issue notes against the government bonds, which could be lent again. A royal charter was granted on 27 July through the passage of the Tonnage Act 1694. The bank was given exclusive possession of the government's balances, and was the only limited-liability corporation allowed to issue banknotes. The £1.2 million was raised in 12 days; half of this was used to rebuild the navy. Although this establishment of the Bank of England marks the origin of central banking, it did not have the functions of a modern central bank, namely, to regulate the value of the national currency, to finance the government, to be the sole authorized distributor of banknotes, and to function as a 'lender of last resort' to banks suffering a liquidity crisis. These modern central banking functions evolved slowly through the 18th and 19th centuries. Although the bank was originally a private institution, by the end of the 18th century it was increasingly being regarded as a public authority with civic responsibility toward the upkeep of a healthy financial system. The currency crisis of 1797, caused by panicked depositors withdrawing from the bank led to the government suspending convertibility of notes into specie payment. The bank was soon accused by the bullionists of causing the exchange rate to fall from over issuing banknotes, a charge which the bank denied. Nevertheless, it was clear that the bank was being treated as an organ of the state. Henry Thornton, a merchant banker and monetary theorist has been described as the father of the modern central bank. An opponent of the real bills doctrine, he was a defender of the bullionist position and a significant figure in monetary theory. Thornton's process of monetary expansion anticipated the theories of Knut Wicksell regarding the "cumulative process which restates the Quantity Theory in a theoretically coherent form". As a response to the 1797 currency crisis, Thornton wrote in 1802 An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, in which he argued that the increase in paper credit did not cause the crisis. The book also gives a detailed account of the British monetary system as well as a detailed examination of the ways in which the Bank of England should act to counteract fluctuations in the value of the pound. Until the mid-nineteenth century, commercial banks were able to issue their own banknotes, and notes issued by provincial banking companies were commonly in circulation. Many consider the origins of the central bank to lie with the passage of the Bank Charter Act 1844. Under the 1844 Act, bullionism was institutionalized in Britain, creating a ratio between the gold reserves held by the Bank of England and the notes that the bank could issue. The Act also placed strict curbs on the issuance of notes by the country banks. The bank accepted the role of 'lender of last resort' in the 1870s after criticism of its lacklustre response to the Overend-Gurney crisis. 
The journalist Walter Bagehot wrote on the subject in Lombard Street: A Description of the Money Market, in which he advocated for the bank to officially become a lender of last resort during a credit crunch, sometimes referred to as "Bagehot's dictum". Paul Tucker phrased the dictum in 2009 as follows: Spread around the world Central banks were established in many European countries during the 19th century. Napoleon created the Banque de France in 1800, in an attempt to improve the financing of his wars. On the continent of Europe, the Bank of France remained the most important central bank throughout the 19th century. The Bank of Finland was founded in 1812, soon after Finland had been taken over from Sweden by Russia to become its grand duchy. A central banking role was played by a small group of powerful family banking houses, typified by the House of Rothschild, with branches in major cities across Europe, as well as the Hottinguer family in Switzerland and the Oppenheim family in Germany. Although central banks today are generally associated with fiat money, the central banks of most of Europe and Japan developed during the 19th and early 20th centuries under the international gold standard. Free banking or currency boards were common at this time. Problems with collapses |
weapon on April 22, 1915, at Ypres by the German Army. The effect on the allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed. Properties Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s23p5, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.) All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless. Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable. Isotopes Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. 
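Using the rounded isotopic abundances quoted above (76% 35Cl, 24% 37Cl), the standard atomic weight of chlorine can be estimated as an abundance-weighted mean. This is a minimal sketch; the isotopic masses used below are standard literature values rather than figures from the text, and the result is only approximate.

```python
# Abundance-weighted mean of the two stable chlorine isotopes, using the
# rounded abundances quoted above; isotopic masses (in u) are standard
# literature values, not taken from the text.
isotopes = [
    (34.969, 0.76),  # 35Cl: mass, fractional abundance
    (36.966, 0.24),  # 37Cl
]
atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"Estimated atomic weight of Cl: {atomic_weight:.2f} u")  # ~35.45 u
```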
Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0×105 y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine. The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl. Chemistry and compounds Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds. Given that E°(O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine. Hydrogen chloride The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process: NaCl + H2SO4 NaHSO4 + HCl NaCl + NaHSO4 Na2SO4 + HCl In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O). 
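Returning to the electrode potentials quoted earlier in this section, E°(Cl2/Cl−) = +1.395 V exceeds E°(O2/H2O) = +1.229 V, so oxidation of water by chlorine is thermodynamically allowed even though it is kinetically slow. The following is a rough free-energy sketch under the assumption that the overall reaction is 2 Cl2 + 2 H2O → 4 HCl + O2 with n = 4 electrons per mole of O2; the Faraday constant is standard, and the stoichiometric framing is mine rather than the text's.

```python
# Rough standard free-energy estimate for 2 Cl2 + 2 H2O -> 4 HCl + O2,
# using the potentials quoted above (Cl2/Cl-: +1.395 V, O2/H2O: +1.229 V).
F = 96485.0                 # Faraday constant, C/mol
E_cell = 1.395 - 1.229      # V
n = 4                       # electrons transferred per mole of O2
delta_G = -n * F * E_cell   # J per mole of O2
print(f"dG ~ {delta_G / 1000:.0f} kJ/mol O2")  # ~ -64 kJ/mol: favourable, yet slow in practice
```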
At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen bonds to chlorine are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation. Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and ions – the latter, in any case, are much less stable than the bifluoride ions () due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates electrophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution: Ph3SnCl + HCl ⟶ Ph2SnCl2 + PhH (solvolysis) Ph3COH + 3 HCl ⟶ + H3O+Cl− (solvolysis) + BCl3 ⟶ + HCl (ligand replacement) PCl3 + Cl2 + HCl ⟶ (oxidation) Other binary chlorides Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though nitrogen in NCl3 is bearing a negative charge, the compound is usually called nitrogen trichloride. Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. 
For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows: EuCl3 + H2 ⟶ EuCl2 + HCl ReCl5 ReCl3 + Cl2 AuCl3 AuCl + Cl2 Most of the chlorides the metals in groups 1, 2, and 3, along with the lanthanides and actinides in the +2 and +3 oxidation states, are mostly ionic, while nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Silver chloride is very insoluble in water and is thus often used as a qualitative test for chlorine. Polychlorine compounds Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow cation is more stable and may be produced as follows: Cl2 + ClF + AsF5 This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, , has also been characterised; it is analogous to triiodide. Chlorine fluorides The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as , , , and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3). Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the direction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water: H2O + 2 ClF ⟶ 2 HF + Cl2O Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. 
It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engine, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for its enriching and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into and ions. Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows: 2 H2O + ClF5 ⟶ 4 HF + FClO2 The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents. Chlorine oxides The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements. Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas. Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered in 1811 by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. 
It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows: + Cl− + 2 H+ ⟶ ClO2 + Cl2 + H2O Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules result in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. Cl2O3 is also produced when photolysing the solid at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows: Cl• + O3 ⟶ ClO• + O2 ClO• + O• ⟶ Cl• + O2 Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive), and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion. Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides. Chlorine oxoacids and oxyanions Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). 
As can be seen from the redox potentials given in the adjacent table, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions: Cl2 + H2O ⇌ HOCl + H+ + Cl− (Kac = 4.2 × 10−4 mol2 l−2); Cl2 + 2 OH− ⇌ OCl− + H2O + Cl− (Kalk = 7.5 × 1015 mol−1 l). The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⟶ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 1027. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⟶ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 1020. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases. Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species.
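The equilibrium constants quoted above (roughly 1027 and 1020) translate into very large negative standard free-energy changes via ΔG° = −RT ln K, which is why the sluggishness of these disproportionations is a purely kinetic effect. A quick check, assuming T = 298 K (the temperature is my assumption, not stated in the text):

```python
import math

# dG = -RT ln K for the two disproportionations discussed above, at an
# assumed temperature of 298 K.
R, T = 8.314, 298.0
for label, K in [("3 ClO- -> 2 Cl- + ClO3-", 1e27),
                 ("4 ClO3- -> Cl- + 3 ClO4-", 1e20)]:
    dG = -R * T * math.log(K) / 1000  # kJ/mol
    print(f"{label}: dG ~ {dG:.0f} kJ/mol")
# ~ -154 and -114 kJ/mol: strongly favourable thermodynamically, yet slow.
```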
Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common |
many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation. Characteristics Classification Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, arranged in the electron configuration [Ar]4s2. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon. Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca2+ cation compared to the hypothetical Ca+ cation. Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them. This classification is mostly obsolete in English-language sources, but is still used in other countries such as Japan. As a result, comparisons with strontium and barium are more germane to calcium chemistry than comparisons with magnesium. Physical Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium; above 450 °C, it changes to an anisotropic hexagonal close-packed arrangement like magnesium. Its density of 1.55 g/cm3 is the lowest in its group. Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered. Chemical The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in the air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. In bulk, calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature. 
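The claim above that calcium is a better conductor by mass than copper or aluminium follows directly from its very low density. The sketch below is indicative only: the room-temperature resistivities are approximate literature values that are not given in the text (only calcium's density of 1.55 g/cm3 is), so treat the exact numbers as assumptions.

```python
# Mass-specific conductivity = 1 / (resistivity * density).
# Resistivities are approximate literature values (assumed, not from the text);
# densities are in kg/m^3, with calcium's 1.55 g/cm^3 taken from the text.
metals = {
    "Ca": (3.4e-8, 1550),   # (resistivity in ohm*m, density in kg/m^3)
    "Al": (2.7e-8, 2700),
    "Cu": (1.7e-8, 8960),
}
for name, (rho, density) in metals.items():
    sigma_per_mass = 1.0 / (rho * density)   # S*m^2/kg
    print(f"{name}: {sigma_per_mass:,.0f} S*m^2/kg")
# Calcium comes out highest per kilogram, consistent with the statement above.
```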
Besides the simple oxide CaO, the peroxide CaO2 can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though it is not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution. Due to the large size of the Ca2+ ion, high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed. Although calcium is in the same group as magnesium and organomagnesium compounds are very commonly used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, although they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb2+ (102 pm) and Ca2+ (100 pm). Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability. Isotopes Natural calcium is a mixture of five stable isotopes (40Ca, 42Ca, 43Ca, 44Ca, and 46Ca) and one isotope with a half-life so long that it can be considered stable for all practical purposes (48Ca, with a half-life of about 4.3 × 1019 years). Calcium is the first (lightest) element to have six naturally occurring isotopes. By far the most common isotope of calcium in nature is 40Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial 40K. Adding another alpha particle leads to unstable 44Ti, which quickly decays via two successive electron captures to stable 44Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope. The other four natural isotopes, 42Ca, 43Ca, 46Ca, and 48Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. 46Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived 45Ca to capture a neutron. 48Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival. 
46Ca and 48Ca are the first "classically stable" nuclides with a six-neutron or eight-neutron excess respectively. Although extremely neutron-rich for such a light element, 48Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to 48Sc is very hindered because of the gross mismatch of nuclear spin: 48Ca has zero nuclear spin, being even–even, while 48Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of 48Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when 48Ca does decay, it does so by double beta decay to 48Ti instead, being the lightest nuclide known to undergo double beta decay. The heavy isotope 46Ca can also theoretically undergo double beta decay to 46Ti as well, but this has never been observed. The lightest and most common isotope 40Ca is also doubly magic and could undergo double electron capture to 40Ar, but this has likewise never been observed. Calcium is the only element to have two primordial doubly magic isotopes. The experimental lower limits for the half-lives of 40Ca and 46Ca are 5.9 × 1021 years and 2.8 × 1015 years respectively. Apart from the practically stable 48Ca, the longest lived radioisotope of calcium is 41Ca. It decays by electron capture to stable 41K with a half-life of about a hundred thousand years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of 41K: traces of 41Ca also still exist today, as it is a cosmogenic nuclide, continuously reformed through neutron activation of natural 40Ca. Many other calcium radioisotopes are known, ranging from 35Ca to 60Ca. They are all much shorter-lived than 41Ca, the most stable among them being 45Ca (half-life 163 days) and 47Ca (half-life 4.54 days). The isotopes lighter than 42Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than 44Ca usually undergo beta minus decay to isotopes of scandium, although near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well. Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature.
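The figure of roughly 0.025% per amu quoted above implies, for the 4 amu difference between 44Ca and 40Ca, a shift of about 0.1% in the 44Ca/40Ca ratio per precipitation step. A minimal sketch of that arithmetic follows; the conversion to per mil is the usual geochemical convention and is not taken from the text.

```python
# Per-step fractionation of the 44Ca/40Ca ratio implied by ~0.025% per amu.
fractionation_per_amu = 0.025 / 100   # from the text, expressed as a fraction
mass_difference = 44 - 40             # amu between the two isotopes
shift = fractionation_per_amu * mass_difference
print(f"44Ca/40Ca shift per step: {shift:.4%} (~{shift * 1000:.1f} per mil)")
# ~0.1%, i.e. the mineral is roughly 1 per mil lighter than the solution.
```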
Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually 44Ca/40Ca) in a sample compared to the same ratio in a standard reference material. 44Ca/40Ca varies by about 1% among common earth materials. History Calcium compounds were known for millennia, although their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln dates back to 2500 BC and was found in Khafajah, Mesopotamia. At about the same time, dehydrated gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx "lime". Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognised by the ancient Romans. In 1787, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths" (i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide)). About these "elements", Lavoisier speculated: Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later. Occurrence and production At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands. Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3F]). The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. 
In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year. In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures. Geochemical cycling Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, uplift of mountains exposes calcium-bearing rocks such as some granites to chemical weathering and releases Ca2+ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are termed "dissolved inorganic carbon" (DIC). The actual reaction is more complicated and involves the bicarbonate ion (HCO3−) that forms when CO2 reacts with water at seawater pH: Ca2+ + 2 HCO3− → CaCO3(s) + CO2 + H2O At seawater pH, most of the CO2 is immediately converted back into HCO3−. The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca2+ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate. Uses The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging. Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys. These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains. Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted. Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes.
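Before turning to those isotope applications, a quick check on the hydrogen-storage remark a few sentences back: the hydrogen content of calcium hydride follows from atomic masses alone. This is a minimal stoichiometry sketch, not a description of any particular storage system; the hydrolysis route shown in the comment is one common way to liberate the hydrogen, and the 1 kg batch size is an arbitrary example.

```python
# Gravimetric hydrogen content of calcium hydride (CaH2).
# One possible release route is hydrolysis: CaH2 + 2 H2O -> Ca(OH)2 + 2 H2,
# which liberates two moles of H2 per mole of CaH2 (half of it coming from the water).

M_CA = 40.078   # g/mol
M_H = 1.008     # g/mol
M_CAH2 = M_CA + 2 * M_H

hydrogen_weight_fraction = (2 * M_H) / M_CAH2
print(f"CaH2 carries {hydrogen_weight_fraction:.1%} hydrogen by mass")

# Hydrogen stored in 1 kg of CaH2, counting only the hydride's own hydrogen:
kg_cah2 = 1.0
mol_cah2 = kg_cah2 * 1000 / M_CAH2
kg_h2_from_hydride = mol_cah2 * 2 * M_H / 1000
print(f"1 kg of CaH2 contains about {kg_h2_from_hydride * 1000:.0f} g of hydrogen")
```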
In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral. In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the 44Ca/40Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis. A similar system exists in seawater, where 44Ca/40Ca tends to rise when the rate of removal of Ca2+ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater 44Ca/40Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca2+ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle. Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium monophosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins. Calcium is on the World Health Organization's List of Essential Medicines. Food sources Foods rich in calcium include dairy products, such as yogurt and cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals. Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs. Biological and pathological role Function Calcium is an essential element needed in large quantities. The Ca2+ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium |
sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color. Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996. Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage. The high heat resistivity and high melting point makes chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI). Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst. Uses of compounds Chromium(IV) oxide (CrO2) is a magnetic compound. Its ideal shape anisotropy, which imparts high coercivity and remnant magnetization, made it a compound superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes. Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge. Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used because of its higher solubility (50 g/L versus 200 g/L respectively). The use of dichromate cleaning solutions is now phased out due to the high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free. Potassium dichromate is a chemical reagent, used as a titrating agent. Chromates are added to drilling muds to prevent corrosion of steel under wet conditions. Chrome alum is Chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning. 
Biological role The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, have not been defined, leaving in question the essentiality of chromium. In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD). "Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium (III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway. The chromium content of common foods is generally low (1-13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect. Dietary recommendations There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential while the European Food Safety Authority (EFSA) of the European Union does not. The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AIs of chromium for women ages 14 through 50 is 25 μg/day, and the AIs for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AIs are 45 μg/day. The AIs for men ages 14 through 50 are 35 μg/day, and the AIs for men ages 50 and above are 30 μg/day. For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree. 
Labeling For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium. Supplementation Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven. Approved and disapproved health claims In 2005, the U.S. Food and Drug Administration had approved a Qualified Health Claim for chromium picolinate with a requirement for very specific label wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue. 
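Stepping back to the labeling arithmetic at the start of this passage, the percent Daily Value is simply the amount per serving divided by the Daily Value, so the effect of the 2016 revision from 120 μg to 35 μg is easy to see. The serving content used below is a hypothetical example, not a figure from the source.

```python
# Percent Daily Value (%DV) for chromium under the old and revised Daily Values.

OLD_DV_UG = 120.0   # pre-2016 Daily Value for chromium, in micrograms
NEW_DV_UG = 35.0    # revised Daily Value, in micrograms

def percent_dv(amount_ug: float, daily_value_ug: float) -> float:
    return amount_ug / daily_value_ug * 100.0

serving_ug = 25.0   # hypothetical chromium content of one supplement serving
print(f"{serving_ug} ug/serving = {percent_dv(serving_ug, OLD_DV_UG):.0f}% DV (old basis)")
print(f"{serving_ug} ug/serving = {percent_dv(serving_ug, NEW_DV_UG):.0f}% DV (revised basis)")
```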
Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome. Two systematic reviews looked at chromium supplements as a mean of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim. Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat. Fresh-water fish Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification. Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. 
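The contamination statements in the fresh-water passage above amount to simple threshold comparisons against the quoted limits of 0.1 mg/L for irrigation water and 1 mg/kg for fish. The sketch below encodes that check; the measured values are hypothetical stand-ins for the "more than five times" wording, not data from the underlying surveys.

```python
# Exceedance factors against the chromium limits quoted above:
# 0.1 mg/L for irrigation water and 1 mg/kg for fish intended for consumption.

IRRIGATION_LIMIT_MG_L = 0.1
FISH_LIMIT_MG_KG = 1.0

def exceedance(measured: float, limit: float) -> float:
    """How many times over the limit a measurement is (1.0 = exactly at the limit)."""
    return measured / limit

# Hypothetical measurements illustrating the ">5x" situations described in the text:
river_sample_mg_l = 0.55
fish_sample_mg_kg = 5.3

print(f"River water: {exceedance(river_sample_mg_l, IRRIGATION_LIMIT_MG_L):.1f}x the irrigation limit")
print(f"Fish tissue: {exceedance(fish_sample_mg_kg, FISH_LIMIT_MG_KG):.1f}x the consumption limit")
```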
In adult fish there are reports of histopathological | a high proportion of reflected light in general, especially the roughly 90% reflectance in the infrared, can be attributed to chromium's magnetic properties. Chromium has unique magnetic properties - chromium is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, its magnetic ordering becomes paramagnetic. The antiferromagnetic properties, which cause the chromium atoms to temporarily ionize and bond with themselves, are present because the body-centered cubic lattice's magnetic properties are disproportionate to the lattice periodicity. This is due to the magnetic moments at the cube's corners and the unequal, but antiparallel, cube centers. From here, the frequency-dependent relative permittivity of chromium, deriving from Maxwell's equations and chromium's antiferromagnetism, leaves chromium with a high infrared and visible light reflectance. Passivation Chromium metal left standing in air is passivated - it forms a thin, protective, surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids. Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts. Isotopes Naturally occurring chromium is composed of three stable isotopes: 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 19 radioisotopes have been characterized, with the most stable being 50Cr with a half-life of (more than) 1.8 × 10¹⁷ years, and 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. 53Cr is the radiogenic decay product of 53Mn (half-life = 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. The isotopes of chromium range in atomic mass from 43 u (43Cr) to 67 u (67Cr). The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay. 53Cr has been posited as a proxy for atmospheric oxygen concentration. Chemistry and compounds Chromium is a member of group 6 of the transition metals.
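As a brief aside before the chemistry section continues, the Mn–Cr systematics described above rest on very simple decay arithmetic. The sketch below uses only the 3.74-million-year half-life of 53Mn quoted in the text; the time points are arbitrary, and every decayed 53Mn atom is counted as one atom of radiogenic 53Cr.

```python
# Decay of the extinct radionuclide 53Mn (half-life 3.74 million years, as quoted above).
# Every 53Mn atom that decays adds one atom of radiogenic 53Cr.

HALF_LIFE_MYR = 3.74

def fraction_remaining(t_myr: float) -> float:
    """Fraction of the initial 53Mn still present after t million years."""
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

for t in (0, 3.74, 10, 50, 100):
    f = fraction_remaining(t)
    print(f"after {t:6.2f} Myr: {f:.3e} of the 53Mn remains, "
          f"{1 - f:.3e} has become radiogenic 53Cr")
```

After roughly 100 million years essentially no 53Mn is left, which is why it is described above as an extinct radionuclide whose former presence is read from 53Cr excesses.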
The +3 and +6 states occur most commonly within chromium compounds, followed by +2; charges of +1, +4 and +5 for chromium are rare, but do nevertheless occasionally exist. Common oxidation states Chromium(0) Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry. Chromium(II) Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution created from dissolving chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond. Chromium(III) A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to Al3+ (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum. Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water. This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5–. Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum. Chromium(VI) Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist at an equilibrium, determined by pH: 2 [CrO4]2− + 2 H+ ⇌ [Cr2O7]2− + H2O Chromium(VI) oxyhalides are also known and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020. Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The change in equilibrium is visible by a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible. Both the chromate and dichromate anions are strong oxidizing reagents at low pH: Cr2O72− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O (ε0 = 1.33 V) They are, however, only moderately oxidizing at high pH: CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V) Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution.
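The two half-reactions just given explain qualitatively why chromium(VI) is a strong oxidant only in acid; the Nernst equation makes the pH dependence explicit. The sketch below applies it to the acidic dichromate couple using the ε0 = 1.33 V value from the text, holding every activity except [H+] at one purely for illustration. The value it returns in strongly basic solution differs from the −0.13 V quoted above because the relevant species there are chromate and Cr(OH)3 rather than dichromate and Cr3+.

```python
# pH dependence of the dichromate/Cr(III) reduction potential via the Nernst equation:
# Cr2O7(2-) + 14 H+ + 6 e- -> 2 Cr(3+) + 7 H2O,  E0 = 1.33 V (from the text).
# Every activity except [H+] is held at 1, so only the 14 H+ term survives.

E0 = 1.33            # standard potential, volts
N_ELECTRONS = 6
N_PROTONS = 14
NERNST_SLOPE = 0.0592  # volts per decade at 25 degrees C (log10 form)

def dichromate_potential(pH: float) -> float:
    # E = E0 - (0.0592 / n) * log10(1 / [H+]^14) = E0 - (0.0592 * 14 / n) * pH
    return E0 - (NERNST_SLOPE * N_PROTONS / N_ELECTRONS) * pH

for pH in (0, 4, 7, 14):
    print(f"pH {pH:2d}: E = {dichromate_potential(pH):+.2f} V")
```

The drop of roughly 0.14 V per pH unit is why dichromate cleaning and oxidation chemistry is always run in strong acid.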
The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct . Chromic acid has the hypothetical formula . It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide , the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent. Other oxidation states Compounds of chromium(V) are rather rare; the oxidation state +5 is only realized in few compounds but are intermediates in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C. Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides () with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing Cr(IV) state such as chromium tetra t-butoxide are also known. Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described. Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions. Occurrence Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore. About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds. The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI). History Early applications Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. 
Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later. In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald. During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased. Chromium is also famous for its reflective, metallic luster when polished. It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924. Production Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium." The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production. The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced in large scale in electric arc furnace or in smaller smelters with either aluminium or silicon in an aluminothermic reaction. For the production of pure chromium, the iron must be separated from the chromium in a two step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at higher elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate. 
4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2 2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium. Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO Cr2O3 + 2 Al → Al2O3 + 2 Cr Applications The creation of metal alloys account for 85% of the available chromium's usage. The remainder of chromium is used in the chemical, refractory, and foundry industries. Metallurgy The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on Chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain Chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency." The United Stated likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany. The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used. In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development. 
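A rough Faraday's-law estimate shows why decorative chromium layers stay below 1 µm while wear-resistant layers take far longer to build. The current densities, plating times and ~15% current efficiency below are illustrative guesses of the kind associated with hexavalent baths, not figures from the source; the six-electron reduction corresponds to depositing the metal from Cr(VI).

```python
# Faraday's-law estimate of electrodeposited chromium thickness.
# Assumptions (not from the source text): Cr(VI) bath, 6 electrons per Cr atom,
# ~15% cathodic current efficiency, density of chromium 7.19 g/cm^3.

FARADAY = 96485.0          # C/mol
M_CR = 51.996              # g/mol
RHO_CR = 7.19              # g/cm^3
N_ELECTRONS = 6
CURRENT_EFFICIENCY = 0.15  # assumed; hexavalent baths are notoriously inefficient

def thickness_um(current_density_a_dm2: float, minutes: float) -> float:
    """Deposit thickness in micrometres for a given current density and time."""
    charge_per_cm2 = current_density_a_dm2 / 100.0 * minutes * 60.0   # coulombs per cm^2
    mol_cr_per_cm2 = charge_per_cm2 * CURRENT_EFFICIENCY / (N_ELECTRONS * FARADAY)
    thickness_cm = mol_cr_per_cm2 * M_CR / RHO_CR
    return thickness_cm * 1e4

print(f"Decorative flash, 10 A/dm2 for 2 min : {thickness_um(10, 2):.2f} um")
print(f"Hard chrome,      40 A/dm2 for 60 min: {thickness_um(40, 60):.1f} um")
```

Under these assumptions a short decorative plate lands well under 1 µm, while an hour at high current density gives a layer tens of micrometres thick.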
Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds. Pigment The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the Postal Service (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of |
through (for hand playing). The bell, dome, or cup is the raised section immediately surrounding the hole. The bell produces a higher "pinging" pitch than the rest of the cymbal. The bow is the rest of the surface surrounding the bell. The bow is sometimes described in two areas: the ride and crash area. The ride area is the thicker section closer to the bell while the crash area is the thinner tapering section near the edge. The edge or rim is the immediate circumference of the cymbal. Cymbals are measured by their diameter either in inches or centimeters. The size of the cymbal affects its sound, larger cymbals usually being louder and having longer sustain. The weight describes how thick the cymbal is. Cymbal weights are important to the sound they produce and how they play. Heavier cymbals have a louder volume, more cut, and better stick articulation (when using drum sticks). Thin cymbals have a fuller sound, lower pitch, and faster response. The profile of the cymbal is the vertical distance of the bow from the bottom of the bell to the cymbal edge (higher profile cymbals are more bowl-shaped). The profile affects the pitch of the cymbal: higher profile cymbals have higher pitch. Types Orchestral cymbals Cymbals offer a composer nearly endless amounts of color and effect. Their unique timbre allows them to project even against a full orchestra and through the heaviest of orchestrations and enhance articulation and nearly any dynamic. Cymbals have been utilized historically to suggest frenzy, fury or bacchanalian revels, as seen in the Venus music in Wagner's Tannhäuser, Grieg's Peer Gynt suite, and Osmin's aria "O wie will ich triumphieren" from Mozart's Die Entführung aus dem Serail. Clash cymbals Orchestral clash cymbals are traditionally used in pairs, each one having a strap set in the bell of the cymbal by which they are held. Such a pair is known as clash cymbals, crash cymbals, hand cymbals, or plates. Certain sounds can be obtained by rubbing their edges together in a sliding movement for a "sizzle", striking them against each other in what is called a "crash", tapping the edge of one against the body of the other in what is called a "tap-crash", scraping the edge of one from the inside of the bell to the edge for a "scrape" or "zischen", or shutting the cymbals together and choking the sound in what is called a "hi-hat" or "crush". A skilled percussionist can obtain an enormous dynamic range from such cymbals. For example, in Beethoven's Symphony No. 9, the percussionist is employed to first play cymbals pianissimo, adding a touch of colour rather than loud crash. Crash cymbals are usually damped by pressing them against the percussionist's body. A composer may write laissez vibrer, or, "let vibrate" (usually abbreviated l.v.), secco (dry), or equivalent indications on the score; more usually, the percussionist must judge when to damp based on the written duration of a crash and the context in which it occurs. Crash cymbals have traditionally been accompanied by the bass drum playing an identical part. This combination, played loudly, is an effective way to accentuate a note since it contributes to both very low and very high-frequency ranges and provides a satisfying "crash-bang-wallop". In older music the composer sometimes provided one part for this pair of instruments, writing senza piatti or piatti soli () if only one is needed. This came from the common practice of having one percussionist play using one cymbal mounted to the shell of the bass drum. 
The percussionist would crash the cymbals with the left hand and use a mallet to strike the bass drum with the right. This method is nowadays often employed in pit orchestras and called for specifically by composers who desire a certain effect. Stravinsky calls for this in his ballet Petrushka, and Mahler calls for this in his Titan Symphony. The modern | than loud crash. Crash cymbals are usually damped by pressing them against the percussionist's body. A composer may write laissez vibrer, or, "let vibrate" (usually abbreviated l.v.), secco (dry), or equivalent indications on the score; more usually, the percussionist must judge when to damp based on the written duration of a crash and the context in which it occurs. Crash cymbals have traditionally been accompanied by the bass drum playing an identical part. This combination, played loudly, is an effective way to accentuate a note since it contributes to both very low and very high-frequency ranges and provides a satisfying "crash-bang-wallop". In older music the composer sometimes provided one part for this pair of instruments, writing senza piatti or piatti soli () if only one is needed. This came from the common practice of having one percussionist play using one cymbal mounted to the shell of the bass drum. The percussionist would crash the cymbals with the left hand and use a mallet to strike the bass drum with the right. This method is nowadays often employed in pit orchestras and called for specifically by composers who desire a certain effect. Stravinsky calls for this in his ballet Petrushka, and Mahler calls for this in his Titan Symphony. The modern convention is for the instruments to have independent parts. However, in kit drumming, a cymbal crash is still most often accompanied by a simultaneous kick to the bass drum, which provides a musical effect and support to the crash. Hi hats Crash cymbals evolved into the low-sock and from this to the modern hi-hat. Even in a modern drum kit, they remain paired with the bass drum as the two instruments which are played with the player's feet. However, hi-hat cymbals tend to be heavy with little taper, more similar to a ride cymbal than to a clash cymbal as found in a drum kit, and perform a ride rather than a crash function. Suspended cymbal Another use of cymbals is the suspended cymbal. This instrument takes its name from the traditional method of suspending the cymbal by means of a leather strap or rope, thus allowing the cymbal to vibrate as freely as possible for maximum musical effect. Early jazz drumming pioneers borrowed this style of cymbal mounting during the early 1900s and later drummers further developed this instrument into the mounted horizontal or nearly horizontally mounted "crash" cymbals of a modern drum kit, However, most modern drum kits do not employ a leather strap suspension |
can be absorbed by crops such as rice. Chinese ministry of agriculture measured in 2002 that 28% of rice it sampled had excess lead and 10% had excess cadmium above limits defined by law. Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil. Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/l may be stable in water having low total solute concentrations and p H and can be difficult to remove by conventional water treatment processes. Production Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ores concentrates from zinc sulfate ores contain up to 1.4% of cadmium. In the 1970s, the output of cadmium was 6.5 pounds per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan. Applications Cadmium is a common component of electric batteries, pigments, coatings, and electroplating. Batteries In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel-cadmium batteries. Nickel-cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver-cadmium battery. Electroplating Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition). Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium. Nuclear fission Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium. Televisions QLED TVs have been starting to include cadmium in construction. 
Some companies have been looking to reduce the environmental impact of human exposure and pollution of the material in televisions during production. Anticancer drugs Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers but their use is often limited due to toxic side effects. However, scientists are advancing in the field and new promising cadmium complex compounds with reduced toxicity have been discovered. Compounds Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums. Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%. In PVC, cadmium was used as heat, light, and weathering stabilizers. Currently, cadmium stabilizers have been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. Cadmium is used in many kinds of solder and bearing alloys, because it has a low coefficient of friction and fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Semiconductors Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors. Laboratory uses Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as various laboratory uses requiring laser light at these wavelengths. Cadmium selenide quantum dots emit bright luminescence under UV excitation (He-Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope. In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α. Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry. By employing a self-assembled monolayer one can obtain a cadmium selective electrode with a ppt-level sensitivity. Biological role and research Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant that causes health hazard to living organisms. 
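Returning to the helium–cadmium laser lines quoted above (325, 354 and 442 nm), converting a wavelength into a photon energy is a one-line calculation using standard physical constants; nothing beyond the wavelengths themselves is taken from the source.

```python
# Photon energy of the He-Cd laser lines listed above, E = h * c / wavelength.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (325, 354, 442):   # two ultraviolet lines and the blue line
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
```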
Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macro molecular damage. However a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. The diatoms live in environments with very low zinc concentrations and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy. Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis. Environment The biogeochemistry of cadmium and its release to the environment has been the subject of review, as has the speciation of cadmium in the environment. Safety Individuals and organizations have been reviewing cadmium's bioinorganic aspects for its toxicity. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death. Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables. There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into an estrogen mimicry that may induce breast cancer is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because the populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors. Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law. The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium in low environmental exposure. 
Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former smoking females. Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha, and affect signal transduction along the estrogen and MAPK signaling pathways at low doses. The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut. As much as 50% of the cadmium inhaled in cigarette smoke may be absorbed. On average, cadmium concentrations in the blood of smokers is 4 to 5 times greater than non-smokers and in the kidney, 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking. In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. | 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium). One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons. Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay. History Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. 
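As a side note on the cadmium cut-off mentioned in the isotope passage above, translating the ~0.5 eV threshold into a neutron speed makes the slow/fast distinction concrete. The sketch below uses only standard constants plus the 0.5 eV figure from the text; the example energies are arbitrary.

```python
# Neutron speed corresponding to the ~0.5 eV cadmium cut-off quoted above,
# v = sqrt(2E/m), plus a trivial classifier against that threshold.

import math

EV = 1.602176634e-19          # joules per electronvolt
NEUTRON_MASS = 1.674927e-27   # kg
CUTOFF_EV = 0.5

def neutron_speed_m_s(energy_ev: float) -> float:
    return math.sqrt(2.0 * energy_ev * EV / NEUTRON_MASS)

def absorbed_by_cadmium(energy_ev: float) -> bool:
    """True if the neutron is below the cadmium cut-off and is very likely absorbed."""
    return energy_ev < CUTOFF_EV

print(f"Cut-off speed: about {neutron_speed_m_s(CUTOFF_EV) / 1000:.1f} km/s")
for e in (0.025, 0.5, 100.0):   # thermal, cut-off, and a faster neutron
    print(f"{e:7.3f} eV: absorbed={absorbed_by_cadmium(e)}, v={neutron_speed_m_s(e) / 1000:.1f} km/s")
```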
Additionally, Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as a pigment was recognized in the 1840s, but the lack of cadmium limited this application. Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains". In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton. After industrial-scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62%, and in 1956, 59%, of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application, red, orange and yellow pigments made from sulfides and selenides of cadmium. The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of the total cadmium consumption was used for plating, and only 10% was used for pigments. At the same time, these decreases in consumption were compensated for by a growing demand for cadmium for nickel-cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006. Occurrence Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I. Metallic cadmium can be found in the Vilyuy River basin in Siberia. Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in flue dust. Cadmium in soil can be absorbed by crops such as rice.
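Since cadmium uptake by crops such as rice is ultimately judged against regulatory limits, a minimal screening sketch follows; the sample concentrations and the 0.2 mg/kg limit are illustrative assumptions, not figures taken from the surveys cited in this article.

```python
# Hypothetical screening of rice samples against an assumed cadmium limit.
# Concentrations are in mg/kg dry weight; both the samples and the 0.2 mg/kg
# limit are illustrative values, not data from the surveys discussed in the text.
CADMIUM_LIMIT_MG_PER_KG = 0.2

samples = {"plot_A": 0.05, "plot_B": 0.31, "plot_C": 0.18, "plot_D": 0.44}

# Keep only the samples that exceed the limit and report their share.
exceeding = {name: c for name, c in samples.items() if c > CADMIUM_LIMIT_MG_PER_KG}
share = len(exceeding) / len(samples)

print(f"{share:.0%} of samples exceed the limit: {sorted(exceeding)}")
```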
In 2002, the Chinese Ministry of Agriculture found that 28% of the rice it sampled had excess lead and 10% had excess cadmium, above the limits defined by law. Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil. Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH, and can be difficult to remove by conventional water treatment processes. Production Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was 6.5 pounds per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan. Applications Cadmium is a common component of electric batteries, pigments, coatings, and electroplating. Batteries In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel-cadmium batteries. Nickel-cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). In 2004, the European Union put a limit of 0.01% on cadmium in electronics, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver-cadmium battery. Electroplating Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to a tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition). Titanium embrittlement from cadmium-plated tool residues resulted in the banning of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium. Nuclear fission Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium. Televisions QLED TVs have begun to include cadmium in their construction.
Some companies have been looking to reduce the environmental impact of human exposure and pollution of the material in televisions during production. Anticancer drugs Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers but their use is often limited due to toxic side effects. However, scientists are advancing in the field and new promising cadmium complex compounds with reduced toxicity have been discovered. Compounds Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums. Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%. In PVC, cadmium was used as heat, light, and weathering stabilizers. Currently, cadmium stabilizers have been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. Cadmium is used in many kinds of solder and bearing alloys, because it has a low coefficient of friction and fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Semiconductors Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors. Laboratory uses Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as |
In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and then 239Pu. Further neutron capture followed by β− decay produces the americium isotope 241Am, which further converts into 242Cm. For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, which results in a different reaction chain and the formation of 244Cm. Curium-244 decays into 240Pu by emission of an alpha particle, but it also absorbs neutrons, resulting in a small amount of heavier curium isotopes. Among those, 247Cm and 248Cm are popular in scientific research because of their long half-lives. However, the production rate of 247Cm in thermal neutron reactors is relatively low because it is prone to undergo fission induced by thermal neutrons. Synthesis of 250Cm via neutron absorption is also rather unlikely because of the short half-life of the intermediate product 249Cm (64 min), which converts by β− decay to the berkelium isotope 249Bk. The above cascade of (n,γ) reactions produces a mixture of different curium isotopes. Their post-synthesis separation is cumbersome, and therefore a selective synthesis is desired. Curium-248 is favored for research purposes because of its long half-life. The most efficient preparation method of this isotope is via α-decay of the californium isotope 252Cf, which is available in relatively large quantities due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced by this method every year. The associated reaction produces 248Cm with an isotopic purity of 97%. Another isotope of interest for research, 245Cm, can be obtained from the α-decay of 249Cf; the latter isotope is produced in minute quantities from the β−-decay of the berkelium isotope 249Bk. Metal preparation Most synthesis routines yield a mixture of different actinide isotopes as oxides, from which a certain isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, as it is highly selective toward curium. Separation of curium from the very similar americium can also be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; whereas americium oxidizes to soluble Am(IV) complexes, curium remains unchanged and can thus be isolated by repeated centrifugation. Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was conducted in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
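A balanced form of the fluoride reduction just described, with barium as the reducing agent, would be the following (the stoichiometry is inferred from standard metallothermic reduction rather than quoted from the source):

\[
\mathrm{2\,CmF_3 + 3\,Ba \longrightarrow 2\,Cm + 3\,BaF_2}
\]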
Another possibility is the reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride. Compounds and reactions Oxides Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (), nitrate (), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3: 4CmO2 ->[\Delta T] 2Cm2O3 + O2. Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen: 2CmO2 + H2 -> Cm2O3 + H2O Furthermore, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium. Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to produce a volatile form of CmO2 and the volatile trioxide CmO3, one of the two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; however, new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well. Halides The colorless curium(III) fluoride (CmF3) can be produced by introducing fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine: A series of ternary fluorides are known of the form A7Cm6F31, where A stands for alkali metal. The colorless curium(III) chloride (CmCl3) is produced in the reaction of curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can further be converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonia salt of the corresponding halide at elevated temperature of about 400–450 °C: An alternative procedure is heating curium oxide to about 600 °C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride results in curium oxychloride: Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. The pnictides of curium of the type CmX are known for the elements nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperatures. Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet. Formation of the complexes of the type , where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and therefore are useful in its selective separation from lanthanides and another actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to biological activity of various microorganisms. 
The resulting complexes exhibit strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying the interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order ~0.1 ms) and spectrum of the fluorescence. Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, however no evidence for incorporation of curium into them. Applications Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes 242Cm and 244Cm are strong alpha emitters (energy 6 MeV); they have relatively short half-lives of 162.8 days and 18.1 years, and produce as much as 120 W/g and 3 W/g of thermal energy, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price of around 2000 USD/g. 243Cm with a ~30 year half-life and good energy yield of ~1.6 W/g could make a suitable fuel, but it produces significant amounts of harmful gamma and beta radiation from radioactive decay products. Though as an α-emitter, 244Cm requires a much thinner radiation protection shielding, it has a high spontaneous fission rate, and thus the neutron and gamma radiation rate are relatively strong. As compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits a 500-fold greater fluence of neutrons, and its higher gamma emission requires a shield that is 20 times thicker—about 2 inches of lead for a 1 kW source, as compared to 0.1 in for 238Pu. Therefore, this application of curium is currently considered impractical. A more promising application of 242Cm is to produce 238Pu, a more suitable radioisotope for thermoelectric generators such as in cardiac pacemakers. The alternative routes to 238Pu use the (n,γ) reaction of 237Np, or the deuteron bombardment of uranium, which both always produce 236Pu as an undesired by-product—since the latter decays to 232U with strong gamma emission. Curium is also a common starting material for the production of higher transuranium elements and superheavy elements. Thus, bombardment of 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yielded certain isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley: + → + Only about 5,000 atoms of californium were produced in this experiment. The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can be used to generate additional energy in a thermal spectrum nuclear reactor; while all of the Cm isotopes are fissionable in fast neutron spectrum reactors. This is one of the motivations for minor actinide separations and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel. X-ray spectrometer The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in the alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. 
APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source. An elaborated APXS setup is equipped with a sensor head containing six curium sources having the total radioactive decay rate of several tens of millicuries (roughly a gigabecquerel). The sources are collimated on the sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (the proton analysis is implemented only in some spectrometers). These spectra contain quantitative information on all major elements in the samples except for hydrogen, helium and lithium. Safety Owing to its high radioactivity, curium and its compounds must be handled in appropriate laboratories under special arrangements. Whereas curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma radiation, which require | isotope is produced in minute quantities from the β−-decay of the berkelium isotope 249Bk. Metal preparation Most synthesis routines yield a mixture of different actinide isotopes as oxides, from which a certain isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. Bis-triazinyl bipyridine complex has been recently proposed as such reagent which is highly selective to curium. Separation of curium from a very similar americium can also be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; whereas americium oxidizes to soluble Am(IV) complexes, curium remains unchanged and can thus be isolated by repeated centrifugation. Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was conducted in the environment free from water and oxygen, in the apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents. Another possibility is the reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride. Compounds and reactions Oxides Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (), nitrate (), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3: 4CmO2 ->[\Delta T] 2Cm2O3 + O2. Alternatively, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen: 2CmO2 + H2 -> Cm2O3 + H2O Furthermore, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium. Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to produce a volatile form of CmO2 and the volatile trioxide CmO3, one of the two known examples of the very rare +6 state for curium. 
Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; however, new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well. Halides The colorless curium(III) fluoride (CmF3) can be produced by introducing fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine: A series of ternary fluorides are known of the form A7Cm6F31, where A stands for alkali metal. The colorless curium(III) chloride (CmCl3) is produced in the reaction of curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can further be converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonia salt of the corresponding halide at elevated temperature of about 400–450 °C: An alternative procedure is heating curium oxide to about 600 °C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride results in curium oxychloride: Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. The pnictides of curium of the type CmX are known for the elements nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperatures. Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet. Formation of the complexes of the type , where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and therefore are useful in its selective separation from lanthanides and another actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to biological activity of various microorganisms. The resulting complexes exhibit strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying the interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order ~0.1 ms) and spectrum of the fluorescence. Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, however no evidence for incorporation of curium into them. Applications Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes 242Cm and 244Cm are strong alpha emitters (energy 6 MeV); they have relatively short half-lives of 162.8 days and 18.1 years, and produce as much as 120 W/g and 3 W/g of thermal energy, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. 
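To make the trade-off between the two isotopes concrete, the sketch below estimates the remaining specific thermal power of 242Cm and 244Cm over time, using only the specific-power and half-life figures quoted above and assuming simple exponential decay (heat from in-grown decay products is ignored):

```python
from math import exp, log

# Specific thermal power (W/g) and half-lives quoted above for the two
# most common curium isotopes; time is measured in years.
isotopes = {
    "Cm-242": {"power_w_per_g": 120.0, "half_life_y": 162.8 / 365.25},
    "Cm-244": {"power_w_per_g": 3.0,   "half_life_y": 18.1},
}

def thermal_power(p0, half_life, t):
    """Remaining specific power after t years, assuming pure exponential decay."""
    return p0 * exp(-log(2) * t / half_life)

for name, data in isotopes.items():
    for t in (1.0, 5.0):
        p = thermal_power(data["power_w_per_g"], data["half_life_y"], t)
        print(f"{name}: ~{p:.2f} W/g after {t:g} years")
```

On these assumptions, the 242Cm output falls by roughly a factor of five within a year, while 244Cm loses only a few percent, which is consistent with the preference for 244Cm discussed next.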
This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price of around 2000 USD/g. 243Cm with a ~30 year half-life and good energy yield of ~1.6 W/g could make a suitable fuel, but it produces significant amounts of harmful gamma and beta radiation from radioactive decay products. Though as an α-emitter, 244Cm requires a much thinner radiation protection shielding, it has a high spontaneous fission rate, and thus the neutron and gamma radiation rate are relatively strong. As compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits a 500-fold greater fluence of neutrons, and its higher gamma emission requires a shield that is 20 times thicker—about 2 inches of lead for a 1 kW source, as compared to 0.1 in for 238Pu. Therefore, this application of curium is currently considered impractical. A more promising application of 242Cm is to produce 238Pu, a more suitable radioisotope for thermoelectric generators such as in cardiac pacemakers. The alternative routes to 238Pu use the (n,γ) reaction of 237Np, or the deuteron bombardment of uranium, which both always produce 236Pu as an undesired by-product—since the latter decays to 232U with strong gamma emission. Curium is also a common starting material for the production of higher transuranium elements and superheavy elements. Thus, bombardment of 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yielded certain isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley: + → + Only about 5,000 atoms of californium were produced in this experiment. The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can be used to generate additional energy in a thermal spectrum nuclear reactor; while all of the Cm isotopes are fissionable in fast neutron spectrum reactors. This is one of the motivations for minor actinide separations and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel. X-ray spectrometer The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in the alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source. An elaborated APXS setup is equipped with a sensor head containing six curium sources |
compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid. The High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, the HFIR nominally produced of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for californium production. The Atomic Energy Commission sold californium-252 to industrial and academic customers in the early 1970s for $10 per microgram, and an average of of californium-252 were shipped each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer thick films. Occurrence Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil; and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles. Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities. Californium was once believed to be produced in supernovas, as their decay matches the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56. The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008. Production Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 () with neutrons, forming berkelium-250 () via neutron capture (n,γ) which, in turn, quickly beta decays (β−) to californium-250 () in the following reaction: (n,γ) → + β− Bombardment of californium-250 with neutrons produces californium-251 and californium-252. Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255. Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively. Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. 
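The figure of 15 captures can be checked directly from the mass numbers: each (n,γ) capture raises the mass number by one, while the intervening β− decays leave it unchanged, so reaching californium-253 from uranium-238 requires

\[
A_{^{253}\mathrm{Cf}} - A_{^{238}\mathrm{U}} = 253 - 238 = 15 \ \text{neutron captures.}
\]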
Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram). Applications Californium-252 has a number of specialized applications as a strong neutron emitter, and each microgram of fresh californium produces 139 million neutrons per minute. This property makes californium useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are employed as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969 when the Georgia Institute of Technology received a loan of 119 μg of californium-252 from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries. Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use californium-252 to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The major uses of californium-252 in 1982 were, in order of use, reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most californium-252 was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but distant secondary uses. In 2021, fast neutrons from a californium-252 source were used for wireless data transmission. Californium-251 has a very small calculated critical mass of about , high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element. In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, as the product of bombardment of californium-249 with calcium-48, making it the heaviest element ever synthesized. The target for this experiment contained about 10 mg of californium-249 deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranium elements; for example, element 103 (later named lawrencium) was first synthesized in 1961 by bombarding californium with boron nuclei. Precautions Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and | the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents. The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid. Californium is only water-soluble as the californium(III) cation. 
Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate, and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate. Isotopes Twenty radioisotopes of californium have been characterized, the most stable being californium-251 with a half-life of 898 years, californium-249 with a half-life of 351 years, californium-250 with a half-life of 13.08 years, and californium-252 with a half-life of 2.645 years. All the remaining isotopes have half-lives shorter than a year, and the majority of these have half-lives shorter than 20 minutes. The isotopes of californium range in mass number from 237 to 256. Californium-249 is formed from the beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Although californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section). Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. Californium-252 undergoes alpha decay 96.9% of the time to form curium-248, while the remaining 3.1% of decays are spontaneous fission. One microgram (μg) of californium-252 emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most of the other isotopes of californium decay to isotopes of curium (atomic number 96) via alpha decay. History Californium was first synthesized at the University of California Radiation Laboratory in Berkeley, by the physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg on or around February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950. To produce californium, a microgram-sized target of curium-242 (242Cm) was bombarded with 35 MeV alpha particles (4He) in the cyclotron at Berkeley, which produced californium-245 (245Cf) plus one free neutron (n): 242Cm + 4He → 245Cf + n. To identify and separate out the element, ion exchange and adsorption methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes. The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that simply means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California". Weighable quantities of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho, and these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes californium-249 to californium-252
16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran for Chancellor of Germany in 2002, but his preferred CDU/CSU–FDP coalition lost to SPD candidate Gerhard Schröder's SPD–Green alliance. In the 2003 Bavarian state election, the CSU won 60.7% of the vote and 124 of 180 seats in the state parliament. This was the first time any party had won a two-thirds majority in a German state parliament. The Economist later suggested that this exceptional result was due to a backlash against Schröder's government in Berlin. The CSU's popularity declined in subsequent years. Stoiber stepped down from the posts of Minister-President and CSU chairman in September 2007. A year later, the CSU lost its majority in the 2008 Bavarian state election, with its vote share dropping from 60.7% to 43.4%. The CSU remained in power by forming a coalition with the FDP. In the 2009 general election, the CSU received only 42.5% of the vote in Bavaria, which at the time constituted the weakest showing in the party's history. The CSU made gains in the 2013 Bavarian state election and the 2013 federal election, which were held a week apart in September 2013. The CSU regained their majority in the Bavarian Landtag and remained in government in Berlin. They had three ministers in the Fourth Merkel cabinet, namely Horst Seehofer (Minister of the Interior, Building and Community), Andreas Scheuer (Minister of Transport and Digital Infrastructure) and Gerd Müller (Minister for Economic Cooperation and Development). The 2018 Bavarian state election, with Markus Söder as the top candidate, yielded the CSU's worst state-election result since 1950, at 37.2% of the vote, a decline of over ten percentage points compared to its result in 2013. After that, the CSU had to form a new coalition government with the Free Voters of Bavaria as the junior partner. The 2021 German federal election saw the worst election result ever for the Union. The CSU also had a weak showing with 5.2% of votes nationally and 31.7% of the total in Bavaria. Relationship with the CDU The CSU is the sister party of the Christian Democratic Union (CDU). Together, they are called the Union. The CSU operates only within Bavaria, and the CDU operates in all states other than Bavaria. While virtually independent, at the federal level the parties form a common CDU/CSU faction. No Chancellor has ever come from the CSU, although Strauß and Edmund Stoiber were CDU/CSU candidates for Chancellor in the 1980 federal election and the 2002 federal election, respectively, which were
From 1978 until his death in 1988, Strauß served as the Minister-President of Bavaria. Strauß was the first leader of the CSU to be a candidate for the German chancellery in 1980. In the 1980 federal election, Strauß ran against the incumbent Helmut Schmidt of the Social Democratic Party of Germany (SPD) but lost thereafter as the SPD and the Free Democratic Party (FDP) managed to secure an absolute majority together, forming a social-liberal coalition. The CSU has led the Bavarian state government since it came into existence in 1946, save from 1954 to 1957 when the SPD formed a state government in coalition with the Bavaria Party and the state branches of the GB/BHE and FDP. Initially, the separatist Bavaria Party (BP) successfully competed for the same electorate as the CSU, as both parties saw and presented themselves as successors to the BVP. The CSU was ultimately able to win this power struggle for itself. Among other things, the BP was involved in the "casino affair" under dubious circumstances by the CSU at the end of the 1950s and lost considerable prestige and votes. In the 1966 state election, the BP finally left the state parliament. Before the 2008 elections in Bavaria, the CSU perennially achieved absolute majorities at the state level by itself. This level of dominance is unique among Germany's 16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran |
board of directors is technically not part of management itself, although its chairman may be considered part of the corporate office if he or she is an executive chairman. A corporation often consists of different businesses, whose senior executives report directly to the CEO or COO, but depends on the form of the business. If organized as a division then the top manager is often known as an executive vice president (EVP). If that business is a subsidiary which has considerably more independence, then the title might be chairman and CEO. In many countries, particularly in Europe and Asia, there is a separate executive board for day-to-day business and supervisory board (elected by shareholders) for control purposes. In these countries, the CEO presides over the executive board and the chairman presides over the supervisory board, and these two roles will always be held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board. This seemingly allows for clear lines of authority. There is a strong parallel here with the structure of government, which tends to separate the political cabinet from the management civil service. In the United States and other countries that follow a single-board corporate structure, the board of directors (elected by the shareholders) is often equivalent to the European or Asian supervisory board, while the functions of the executive board may be vested either in the board of directors or in a separate committee, which may be called an operating committee (J.P. Morgan Chase), management committee (Goldman Sachs), executive committee (Lehman Brothers), or executive council (Hewlett-Packard), or executive board (HeiG) composed of the division/subsidiary heads and senior officers that report directly to the CEO. United States State laws in the United States traditionally required certain positions to be created within every corporation, such as president, secretary and treasurer. Today, the approach under the Model Business Corporation Act, which is employed in many states, is to grant companies discretion in determining which titles to have, with the only mandated organ being the board of directors. Some states that do not employ the MBCA continue to require that certain offices be established. Under the law of Delaware, where most large US corporations are established, stock certificates must be signed by two officers with titles specified by law (e.g. a president and secretary or a president and treasurer). Every corporation incorporated in California must have a chairman of the board or a president (or both), as well as a secretary and a chief financial officer. Limited liability company (LLC)-structured companies are generally run directly by their members, but the members can agree to appoint officers such as a CEO or to appoint "managers" to operate the company. American companies are generally led by a CEO. In some companies, the CEO also has the title of "president". In other companies, a president is a different person, and the primary duties of the two positions are defined in the company's bylaws (or the laws of the governing legal jurisdiction). Many companies also have a CFO, a chief operating officer (COO) and other senior positions such as General Counsel (CLO), chief strategy officer (CSO), chief marketing officer (CMO), etc. that report to the president and CEO. 
The next level, which are not executive positions, is middle management and may be called "vice presidents", "directors" or "managers", depending on the size and required managerial depth of the company. United Kingdom In British English, the title of managing director is generally synonymous with that of chief executive officer. Managing directors do not have any particular authority under the Companies Act in the UK, but do have implied authority based on the general understanding of what their position entails, as well as any authority expressly delegated by the board of directors. Japan and South Korea In Japan, corporate titles are roughly standardized across companies and organizations; although there is variation from company to company, corporate titles within a company are always consistent, and the large companies in Japan generally follow the same outline. These titles are the formal titles that are used on business cards. Korean corporate titles are similar to those of Japan. Legally, Japanese and Korean companies are only required to have a board of directors with at least one representative director. In Japanese, a company director is called a torishimariyaku (取締役) and a representative director is called a daihyō torishimariyaku (代表取締役). The equivalent Korean titles are isa (이사, 理事) and daepyo-isa (대표이사, 代表理事). These titles are often combined with lower titles, e.g. senmu torishimariyaku or jōmu torishimariyaku for Japanese executives who are also board members. Most Japanese companies also have statutory auditors, who operate alongside the board of directors in supervisory roles. The typical structure of executive titles in large companies includes the following: {| class="wikitable" |- !English gloss !Kanji (hanja) !Japanese !Korean !Comments |- |Chairman |会長(會長) |Kaichō |Hwejang(회장) |Often a semi-retired president or company founder. Denotes a position with considerable power within the company exercised through behind-the-scenes influence via the active president. |- |Vice chairman |副会長(副會長) |Fuku-kaichō |Bu-hwejang(부회장) |At Korean family-owned chaebol companies such as Samsung, the vice-chairman commonly holds the CEO title (i.e., vice chairman and CEO) |- |President |社長 |Shachō |Sajang(사장) |Often CEO of the corporation. Some companies do not have the "chairman" position, in which case the "president" is the top position that is equally respected and authoritative. |- |Deputy presidentor Senior executive vice president |副社長 |Fuku-shachō |Bu-sajang(부사장) |Reports to the president |- |Executive vice president |専務 |Senmu |Jŏnmu(전무) | |- |Senior vice president |常務 |Jōmu |Sangmu(상무) | |- |Vice presidentor general manageror department head |部長 |Buchō |Bujang(부장) |Highest non-executive title; denotes a head of a division or department. There is significant variation in the official English translation used by different companies. |- |Deputy general manager |次長 |Jichō |Chajang(차장) |Direct subordinate to buchō/bujang |- |Manageror section head |課長 |Kachō |Gwajang(과장) |Denotes a head of a team or section underneath a larger division/department |- |Assistant manageror team leader |係長(代理) |Kakarichō |Daeri'''(대리) | |- |Staff |社員 |Shain|Sawon(사원) |Staff without managerial titles are often referred to without using a title at all |} The top management group, comprising jomu/sangmu and above, is often referred to collectively as "senior management" (幹部 or 重役; kambu or juyaku in Japanese; ganbu or jungyŏk in Korean). 
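For readers who want the executive-title table above in machine-readable form, a minimal sketch follows; the keys are the English glosses from the table and the values are the Japanese and Korean romanizations given there (kanji and hanja omitted for brevity):

```python
# Compact restatement of the executive-title table above. Values are taken
# directly from the table; this is a reference structure, not an exhaustive list.
TITLES = {
    "chairman":                 {"ja": "kaichō",      "ko": "hwejang"},
    "vice chairman":            {"ja": "fuku-kaichō", "ko": "bu-hwejang"},
    "president":                {"ja": "shachō",      "ko": "sajang"},
    "deputy president / SEVP":  {"ja": "fuku-shachō", "ko": "bu-sajang"},
    "executive vice president": {"ja": "senmu",       "ko": "jŏnmu"},
    "senior vice president":    {"ja": "jōmu",        "ko": "sangmu"},
    "general manager":          {"ja": "buchō",       "ko": "bujang"},
    "deputy general manager":   {"ja": "jichō",       "ko": "chajang"},
    "section head":             {"ja": "kachō",       "ko": "gwajang"},
    "assistant manager":        {"ja": "kakarichō",   "ko": "daeri"},
    "staff":                    {"ja": "shain",       "ko": "sawon"},
}

def lookup(english_gloss: str) -> dict:
    """Return the Japanese and Korean romanizations for an English gloss."""
    return TITLES[english_gloss]

print(lookup("executive vice president"))  # {'ja': 'senmu', 'ko': 'jŏnmu'}
```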
Some Japanese and Korean companies have also adopted American-style titles, but these are not yet widespread and their usage varies. For example, although there is a Korean translation for chief operating officer (최고운영책임자, choego unyŏng chaegimja), not many companies have yet adopted it with an exception of a few multi-national companies such as Samsung and CJ (a spin-off from Samsung), while the CFO title is often used alongside other titles such as bu-sajang (SEVP) or Jŏnmu (EVP). Since the late 1990s, many Japanese companies have introduced the title of | do have implied authority based on the general understanding of what their position entails, as well as any authority expressly delegated by the board of directors. Japan and South Korea In Japan, corporate titles are roughly standardized across companies and organizations; although there is variation from company to company, corporate titles within a company are always consistent, and the large companies in Japan generally follow the same outline. These titles are the formal titles that are used on business cards. Korean corporate titles are similar to those of Japan. Legally, Japanese and Korean companies are only required to have a board of directors with at least one representative director. In Japanese, a company director is called a torishimariyaku (取締役) and a representative director is called a daihyō torishimariyaku (代表取締役). The equivalent Korean titles are isa (이사, 理事) and daepyo-isa (대표이사, 代表理事). These titles are often combined with lower titles, e.g. senmu torishimariyaku or jōmu torishimariyaku for Japanese executives who are also board members. Most Japanese companies also have statutory auditors, who operate alongside the board of directors in supervisory roles. The typical structure of executive titles in large companies includes the following: {| class="wikitable" |- !English gloss !Kanji (hanja) !Japanese !Korean !Comments |- |Chairman |会長(會長) |Kaichō |Hwejang(회장) |Often a semi-retired president or company founder. Denotes a position with considerable power within the company exercised through behind-the-scenes influence via the active president. |- |Vice chairman |副会長(副會長) |Fuku-kaichō |Bu-hwejang(부회장) |At Korean family-owned chaebol companies such as Samsung, the vice-chairman commonly holds the CEO title (i.e., vice chairman and CEO) |- |President |社長 |Shachō |Sajang(사장) |Often CEO of the corporation. Some companies do not have the "chairman" position, in which case the "president" is the top position that is equally respected and authoritative. |- |Deputy presidentor Senior executive vice president |副社長 |Fuku-shachō |Bu-sajang(부사장) |Reports to the president |- |Executive vice president |専務 |Senmu |Jŏnmu(전무) | |- |Senior vice president |常務 |Jōmu |Sangmu(상무) | |- |Vice presidentor general manageror department head |部長 |Buchō |Bujang(부장) |Highest non-executive title; denotes a head of a division or department. There is significant variation in the official English translation used by different companies. 
|- |Deputy general manager |次長 |Jichō |Chajang(차장) |Direct subordinate to buchō/bujang |- |Manageror section head |課長 |Kachō |Gwajang(과장) |Denotes a head of a team or section underneath a larger division/department |- |Assistant manageror team leader |係長(代理) |Kakarichō |Daeri'''(대리) | |- |Staff |社員 |Shain|Sawon(사원) |Staff without managerial titles are often referred to without using a title at all |} The top management group, comprising jomu/sangmu and above, is often referred to collectively as "senior management" (幹部 or 重役; kambu or juyaku in Japanese; ganbu or jungyŏk in Korean). Some Japanese and Korean companies have also adopted American-style titles, but these are not yet widespread and their usage varies. For example, although there is a Korean translation for chief operating officer (최고운영책임자, choego unyŏng chaegimja), not many companies have yet adopted it with an exception of a few multi-national companies such as Samsung and CJ (a spin-off from Samsung), while the CFO title is often used alongside other titles such as bu-sajang (SEVP) or Jŏnmu (EVP). Since the late 1990s, many Japanese companies have introduced the title of shikkō yakuin (執行役員) or "officer", seeking to emulate the separation of directors and officers found in American companies. In 2002, the statutory title of shikkō yaku (執行役) was introduced for use in companies that introduced a three-committee structure in their board of directors. The titles are frequently given to buchō and higher-level personnel. Although the two titles are very similar in intent and usage, there are several legal distinctions: shikkō yaku make their own decisions in the course of performing work delegated to them by the board of directors, and are considered managers of the company rather than employees, with a legal status similar to that of directors. Shikkō yakuin are considered employees of the company that follow the decisions of the board of directors, although in some cases directors may have the shikkō yakuin title as well. Senior management The highest-level executives in senior management usually have titles beginning with "chief" and ending with "officer", forming what is often called the "C-suite" or "CxO", where "x" is a variable that could be any functional area (not to be confused with CXO). The traditional three such officers are CEO, COO, and CFO. Depending on the management structure, titles may exist instead of, or be blended/overlapped with, other traditional executive titles, such as president, various designations of vice presidents (e.g. VP of marketing), and general managers or directors of various divisions (such as director of marketing); the latter may or may not imply membership of the board of directors. Certain other prominent positions have emerged, some of which are sector-specific. For example, chief audit executive (CAE), chief procurement officer (CPO) and chief risk officer (CRO) positions are often found in many types of financial services companies. Technology companies of all sorts now tend to have a chief technology officer (CTO) to manage technology development. A CIO oversees information technology (IT) matters, either in companies that specialize in IT or in any kind of company that relies on it for supporting infrastructure. Many companies now also have a chief marketing officer (CMO), particularly mature companies in competitive sectors, where brand management is a high priority. 
A chief value officer (CVO) is appointed in companies where business processes and organizational entities are focused on the creation and maximization of value. Approximately 50% of the S&P 500 companies have created a chief strategy officer (CSO) in their top management team to lead strategic planning and manage inorganic growth, which provides a long-range perspective versus the tactical view of the COO or CFO. This function often replaces a COO on the C-suite team, in cases where the company wants to focus on growth rather than efficiency and cost containment. A chief administrative officer may be found in many large complex organizations that have various departments or divisions. Additionally, many companies now call their top diversity leadership position the chief diversity officer (CDO). However, this and many other nontraditional and lower-ranking titles are not universally recognized as corporate officers, and they tend to be specific to particular organizational cultures or the preferences of employees. Specific corporate officer positions Chairman of the board – presiding officer of the corporate board of directors. The chairman influences the board of directors, which in turn elects and removes the officers of a corporation and oversees the human, financial, environmental and technical operations of a corporation. The CEO may also hold the title of "chairman", resulting in an executive chairman. In this case, the board frequently names an independent member of the board as a lead director. Executive chairman – the chairman's post may also exist as an office separate from that of CEO, and it is considered an executive chairman if that titleholder wields influence over company operations, such as Steve Case of AOL Time Warner and Douglas Flint of HSBC. In particular, the group chairmanship of HSBC is considered the top position of that institution, outranking the chief executive, and is responsible for leading the board and representing the company in meetings with government figures. Prior to the creation of the group management board in 2006, HSBC's chairman essentially held the duties of a chief executive at an equivalent institution, while HSBC's chief executive served as the deputy. After the 2006 reorganization, the management cadre ran the business, while the chairman oversaw the controls of the business through compliance and audit and the direction of the business. Non-executive chairman – also a separate post from the CEO; unlike an executive chairman, a non-executive chairman does not interfere in day-to-day company matters. Across the world, many companies have separated the roles of chairman and CEO, often resulting in a non-executive chairman, saying that this move improves corporate governance. Chief business officer is a corporate senior executive who assumes full management responsibility for the company's deal making, provides leadership and executes a deal strategy that will allow the company to fulfill its scientific/technology mission and build shareholder value, and provides managerial guidance to the company's product development staff as needed. Chief of staff is a corporate director-level manager who has overall responsibility for staff activity within the company, often including responsibility for hiring and firing the highest-level managers and sometimes directors. They can work with and report directly to managing directors and the chief executive officer. Commissioner Financial control officer (FCO or FC), also called comptroller or controller
part of Cambridge's economy in the late 19th and early 20th century, but educational institutions are its biggest employers today. Harvard and MIT together employ about 20,000. As a cradle of technological innovation, Cambridge was home to technology firms Analog Devices, Akamai, Bolt, Beranek, and Newman (BBN Technologies) (now part of Raytheon), General Radio (later GenRad), Lotus Development Corporation (now part of IBM), Polaroid, Symbolics, and Thinking Machines. In 1996, Polaroid, Arthur D. Little, and Lotus were Cambridge's top employers, with over 1,000 employees, but they faded out a few years later. Health care and biotechnology firms such as Genzyme, Biogen Idec, bluebird bio, Millennium Pharmaceuticals, Sanofi, Pfizer and Novartis have significant presences in the city. Though headquartered in Switzerland, Novartis continues to expand its operations in Cambridge. Other major biotech and pharmaceutical firms expanding their presence in Cambridge include GlaxoSmithKline, AstraZeneca, Shire, and Pfizer. Most of Cambridge's biotech firms are in Kendall Square and East Cambridge, which decades ago were the city's center of manufacturing. Some others are in University Park at MIT, a new development in another former manufacturing area. None of the high-technology firms that once dominated the economy was among the 25 largest employers in 2005, but by 2008 Akamai and ITA Software were. Google, IBM Research, Microsoft Research, and Philips Research maintain offices in Cambridge. In late January 2012—less than a year after acquiring Billerica-based analytic database management company, Vertica—Hewlett-Packard announced it would also be opening its first offices in Cambridge. Also around that time, e-commerce giants Staples and Amazon.com said they would be opening research and innovation centers in Kendall Square. And LabCentral provides a shared laboratory facility for approximately 25 emerging biotech companies. The proximity of Cambridge's universities has also made the city a center for nonprofit groups and think tanks, including the National Bureau of Economic Research, the Smithsonian Astrophysical Observatory, the Lincoln Institute of Land Policy, Cultural Survival, and One Laptop per Child. In September 2011, the City of Cambridge launched the "Entrepreneur Walk of Fame" initiative. The Walk recognizes people who have made contributions to innovation in global business. In 2021, Cambridge was one of approximately 27 US cities to receive a AAA rating from each of the three major credit rating agencies in the nation, Moody's Investors Service, Standard & Poor's and Fitch Ratings. 2021 marked the 22nd consecutive year that Cambridge had retained this distinction. Top employers , the city's ten largest employers are: Arts and culture Museums Harvard Art Museum, including the Busch-Reisinger Museum, a collection of Germanic art, the Fogg Art Museum, a comprehensive collection of Western art, and the Arthur M. Sackler Museum, a collection of Middle East and Asian art Harvard Museum of Natural History, including the Glass Flowers collection Peabody Museum of Archaeology and Ethnology, Harvard MIT Museum List Visual Arts Center, MIT Semitic Museum, Harvard Public art Cambridge has a large and varied collection of permanent public art, on both city property (managed by the Cambridge Arts Council) and the Harvard and MIT campuses. 
Temporary public artworks are displayed as part of the annual Cambridge River Festival on the banks of the Charles River, during winter celebrations in Harvard and Central Squares, and at university campus sites. Experimental forms of public artistic and cultural expression include the Central Square World's Fair, the annual Somerville-based Honk! Festival, and If This House Could Talk, a neighborhood art and history event. Street musicians and other performers entertain tourists and locals in Harvard Square during the warmer months. The performances are coordinated through a public process that has been developed collaboratively by the performers, city administrators, private organizations and business groups. The Cambridge public library contains four Works Progress Administration murals completed in 1935 by Elizabeth Tracy Montminy: Religion, Fine Arts, History of Books and Paper, and The Development of the Printing Press. Architecture Despite intensive urbanization during the late 19th century and the 20th century, Cambridge has several historic buildings, including some from the 17th century. The city also has abundant contemporary architecture, largely built by Harvard and MIT. Notable historic buildings in the city include: The Asa Gray House (1810) Austin Hall, Harvard University (1882–84) Cambridge City Hall (1888–89) Cambridge Public Library (1888) Christ Church, Cambridge (1761) Cooper-Frost-Austin House (1689–1817) Elmwood House (1767), residence of the president of Harvard University First Church of Christ, Scientist (1924–30) The First Parish in Cambridge (1833) Harvard-Epworth United Methodist Church (1891–93) Harvard Lampoon Building (1909) The Hooper-Lee-Nichols House (1685–1850) Longfellow House–Washington's Headquarters National Historic Site (1759), former home of poet Henry Wadsworth Longfellow and headquarters of George Washington The Memorial Church of Harvard University (1932) Memorial Hall, Harvard University (1870–77) Middlesex County Courthouse (1814–48) Urban Rowhouse (1875) O'Reilly Spite House (1908), built to spite a neighbor who would not sell his adjacent land Contemporary architecture: Baker House dormitory, MIT, by Finnish architect Alvar Aalto, one of only two Aalto buildings in the US Harvard Graduate Center/Harkness Commons, by The Architects Collaborative (TAC, with Walter Gropius) Carpenter Center for the Visual Arts, Harvard, the only Le Corbusier building in North America Harvard's Science Center, Holyoke Center and Peabody Terrace, by Catalan architect and Harvard Graduate School of Design Dean Josep Lluís Sert Kresge Auditorium, MIT, by Eero Saarinen MIT Chapel, by Eero Saarinen Design Research Building, by Benjamin Thompson and Associates American Academy of Arts and Sciences, by Kallmann McKinnell and Wood, also architects of Boston City Hall Arthur M. Sackler Museum, Harvard, one of the few buildings in the US by Pritzker Prize winner James Stirling Harvard Art Museums, renovation and major expansion of Fogg Museum building, completed in 2014 by Renzo Piano Stata Center, home to the MIT Computer Science and Artificial Intelligence Laboratory, the Department of Linguistics, and the Department of Philosophy, by Frank Gehry The two MIT Media Lab buildings by I. M. Pei and Fumihiko Maki Simmons Hall, MIT, by Steven Holl Music The city has an active music scene, from classical performances to the latest popular bands. 
Beyond its colleges and universities, Cambridge has many music venues, including The Middle East, Club Passim, The Plough and Stars, The Lizard Lounge and the Nameless Coffeehouse. Parks and recreation Consisting largely of densely built residential space, Cambridge lacks significant tracts of public parkland. Easily accessible open space on the university campuses, including Harvard Yard, the Radcliffe Yard, and MIT's Great Lawn, as well as the considerable open space of Mount Auburn Cemetery and Fresh Pond Reservation, partly compensates for this. At Cambridge's western edge, the cemetery is known as a garden cemetery because of its landscaping (the oldest planned landscape in the country) and arboretum. Although known as a Cambridge landmark, much of the cemetery lies within Watertown. It is also an Important Bird Area (IBA) in the Greater Boston area. Fresh Pond Reservation is the largest open green space in Cambridge with 162 acres (656,000 m2) of land around a 155-acre (627,000 m2) kettle hole lake. This land includes a 2.25-mile walking trail around the reservoir and a public 9-hole golf course. Public parkland includes the esplanade along the Charles River, which mirrors its Boston counterpart; Cambridge Common, a busy and historic public park adjacent to Harvard's campus; Danehy Park, formerly a landfill; and the Alewife Brook Reservation. Government Federal and state representation Cambridge is split between Massachusetts's 5th and 7th U.S. congressional districts. The 5th district seat is held by Democrat Katherine Clark, who replaced now-Senator Ed Markey in a 2013 special election; the 7th is represented by Democrat Ayanna Pressley, elected in 2018. The state's senior United States senator is Democrat Elizabeth Warren, elected in 2012, who lives in Cambridge. The governor of Massachusetts is Republican Charlie Baker, elected in 2014. Cambridge is represented in six districts in the Massachusetts House of Representatives: the 24th Middlesex (which includes parts of Belmont and Arlington), the 25th and 26th Middlesex (the latter of which includes a portion of Somerville), the 29th Middlesex (which includes a small part of Watertown), and the Eighth and Ninth Suffolk (both including parts of the City of Boston). The city is represented in the Massachusetts Senate as a part of the 2nd Middlesex, Middlesex and Suffolk, and 1st Suffolk and Middlesex districts. Politics From 1860 to 1880, Republicans Abraham Lincoln, Ulysses S. Grant, Rutherford B. Hayes, and James Garfield each won Cambridge, Grant doing so by margins of over 20 points in both of his campaigns. Following that, from 1884–1892, Grover Cleveland won Cambridge in all three of his presidential campaigns, by less than ten points each time. Then from 1896 to 1924, Cambridge became something of a "swing" city with a slight Republican lean. GOP nominees carried the city in five of the eight presidential elections during that time frame, with five of the elections resulting in either a plurality or a margin of victory of fewer than ten points. The city of Cambridge is extremely Democratic in modern times, however. In the last 23 presidential elections dating back to the nomination of Al Smith in 1928, the Democratic nominee has carried Cambridge in every election. Every Democratic nominee since Massachusetts native John F. Kennedy in 1960 has received at least 70% of the vote, except for Jimmy Carter in 1976 and 1980. 
Since 1928, the only Republican nominee to come within ten points of carrying Cambridge is Dwight Eisenhower in his 1956 re-election bid. City government Cambridge has a city government led by a mayor and a nine-member city council. There is also a six-member school committee that functions alongside the superintendent of public schools. The councilors and school committee members are elected every two years using proportional representation. The mayor is elected by the city councilors from among themselves and serves as the chair of city council meetings. The mayor also sits on the school committee. The mayor is not the city's chief executive. Rather, the city manager, who is appointed by the city council, serves in that capacity. Under the city's Plan E form of government, the city council does not have the power to appoint or remove city officials who are under the direction of the city manager. The city council and its members are also forbidden from giving orders to any subordinate of the city manager. Louis DePasquale is the City Manager, having succeeded Lisa C. Peterson, the Acting City Manager and Cambridge's first woman City Manager, on November 14, 2016. Peterson became Acting City Manager on September 30, 2016, after Richard C. Rossi announced that he would opt out of his contract renewal. Rossi succeeded Robert W. Healy, who retired in June 2013 after 32 years in the position. In recent history, the media has highlighted the salary of the city manager as one of the highest for a Massachusetts civic employee. * = current mayor ** = former mayor On March 8, 2021, Cambridge City Council voted to recognize polyamorous domestic partnerships, becoming the second city in the United States following neighboring Somerville, which had done so in 2020. County government Cambridge was a county seat of Middlesex County, along with Lowell, until the abolition of county government. Though the county government was abolished in 1997, the county still exists as a geographical and political region. The employees of Middlesex County courts, jails, registries, and other county agencies now work directly for the state. The county's registrars of Deeds and Probate remain in Cambridge, but the Superior Court and District Attorney have had their operations transferred to Woburn. Third District Court has shifted operations to Medford, and the county Sheriff's office awaits near-term relocation. Education Higher education Cambridge is perhaps best known as an academic and intellectual center. Its colleges and universities include: Cambridge School of Culinary Arts Harvard University Hult International Business School Lesley University Longy School of Music of Bard College Massachusetts Institute of Technology Radcliffe College (now merged with Harvard College) At least 258 of the world's total 962 Nobel Prize winners have at some point in their careers been affiliated with universities in Cambridge. The American Academy of Arts and Sciences is also based in Cambridge. Primary and secondary public education Amigos School Baldwin School (formerly the Agassiz School) Cambridgeport School Fletcher-Maynard Academy Graham and Parks Alternative School Haggerty School Kennedy-Longfellow School King Open School Martin Luther King, Jr. 
School Morse School (a Core Knowledge school) Peabody School Tobin School (a Montessori school) Five upper schools offer grades 6–8 in some of the same buildings as the elementary schools: Amigos School Cambridge Street Upper School Putnam Avenue Upper School Rindge Avenue Upper School Vassal Lane Upper School Cambridge has three district public high school programs, the principal one being Cambridge Rindge and Latin School (CRLS). Other public charter schools include Benjamin Banneker Charter School, which serves grades K–6; Community Charter School of Cambridge in Kendall Square, which serves grades 7–12; and Prospect Hill Academy, a charter school whose upper school is in Central Square though it is not a part of the Cambridge Public School District. Primary and secondary private education Cambridge also has several private schools, including: Boston Archdiocesan Choir School Buckingham Browne & Nichols School Cambridge Montessori school Cambridge Friends School Fayerweather Street School International School of Boston (formerly École Bilingue) Matignon High School Shady Hill School St. Peter School Media Newspapers Cambridge is served by the Cambridge Chronicle, the oldest surviving weekly paper in the United States. Another popular online newspaper is Cambridge Day. Radio Cambridge is home to the following commercially licensed and student-run radio stations: Television and broadband Cambridge Community Television (CCTV) has served the city since its inception in 1988. CCTV operates Cambridge's public access television facility and three television channels, 8, 9, and 96, on the Cambridge cable system (Comcast). The city has invited tenders from other cable providers, but Comcast remains its only fixed television and broadband utility, though services from American satellite TV providers are available. In October 2014, Cambridge City Manager Richard Rossi appointed a citizen Broadband Task Force to "examine options to increase competition, reduce pricing, and improve speed, reliability and customer service for both residents and businesses." Infrastructure Utilities Cable television service is provided by Comcast Communications. Parts of Cambridge are served by a district heating systems loop for industrial organizations that also cover Boston. Electric service and natural gas are both provided by Eversource Energy. Landline service is provided by Verizon Communication. All phones in Cambridge are connected to Verizon's series of central office locations in the metropolitan area. The city maintains its own Public, educational, and government access (PEG) known as Cambridge Community Television (CCTV). Water department Cambridge obtains water from Hobbs Brook (in Lincoln and Waltham) and [[Stony Brook (Boston)|Stony Brook (Waltham and Weston), as well as an emergency connection to the Massachusetts Water Resources Authority. The city owns over of land in other towns that includes these reservoirs and portions of their watershed. Water from these reservoirs flows by gravity through an aqueduct to Fresh Pond in Cambridge. It is then treated in an adjacent plant and pumped uphill to an elevation of above sea level at the Payson Park Reservoir (Belmont). The water is then redistributed downhill via gravity to individual users in the city. A new water treatment plant opened in 2001. In October 2016, the City of Cambridge announced that, owing to drought conditions, they would begin buying water from the MWRA. 
On January 3, 2017, Cambridge announced that "As a result of continued rainfall each month since October 2016, we have been able to significantly reduce the need to use MWRA water. We have not purchased any MWRA water since December 12, 2016 and if 'average' rainfall continues this could continue for several months." Sewer service is available in Cambridge. The city is inter-connected with the Massachusetts Water Resources Authority (MWRA)'s sewage network with sewage treatment plant in the Boston Harbor. Transportation Road Several major roads lead to Cambridge, including Route 2, Route 16, and the McGrath Highway (Route 28). The Massachusetts Turnpike does not pass through Cambridge but provides access by an exit in nearby Allston. Both U.S. Route 1 and Interstate 93 also provide additional access on the eastern end of Cambridge at Leverett Circle in Boston. Route 2A runs the length of the city, chiefly along Massachusetts Avenue. The Charles River forms the southern border of Cambridge and is crossed by 11 bridges connecting Cambridge to Boston, including the Longfellow Bridge and the Harvard Bridge, eight of which are open to motorized road traffic. Cambridge has an irregular street network because many of the roads date from the colonial era. Contrary to popular belief, the road system did not evolve from longstanding cow-paths. Roads connected various village settlements with each other and nearby towns and were shaped by geographic features, most notably streams, hills, and swampy areas. Today, the major "squares" are typically connected by long, mostly straight roads, such as Massachusetts Avenue between Harvard Square and Central Square, or Hampshire Street between Kendall Square and Inman Square. Mass transit Cambridge is served by the MBTA, including the Porter Square Station on the regional Commuter Rail; the Lechmere Station on the Green Line; and the Red Line at Alewife, Porter Square, Harvard Square, Central Square, and Kendall Square/MIT Stations. Alewife Station, the terminus of the Red Line, has a large multi-story parking garage (at a rate of $7 per day ). The Harvard bus tunnel, under Harvard Square, connects to the Red Line underground. This tunnel was originally opened for streetcars in 1912 and served trackless trolleys (trolleybuses) and buses as the routes were converted; four lines of the MBTA trolleybus system continue to use it. The tunnel was partially reconfigured when the Red Line was extended to Alewife in the early 1980s. Besides the state-owned transit agency, the city is also served by the Charles River Transportation Management Agency (CRTMA) shuttles which are supported by some of the largest companies operating in the city, in addition to the municipal government itself. Cycling Cambridge has several bike paths, including one along the Charles River, and the Linear Park connecting the Minuteman Bikeway at Alewife with the Somerville Community Path. A connection to Watertown is under construction. Bike parking is common and there are bike lanes on many streets, although concerns have been expressed regarding the suitability of many of the lanes. On several central MIT streets, bike lanes transfer onto the sidewalk. Cambridge bans cycling on certain sections of sidewalk where pedestrian traffic is heavy. While Bicycling Magazine in 2006 rated Boston as one of the worst cities in the nation for bicycling, it has given Cambridge honorable mention as one of the best and was called by the magazine "Boston's Great Hope". 
Boston has since then followed the example of Cambridge and made considerable efforts to improve bicycling safety and convenience. Cambridge has an official bicycle committee. The LivableStreets Alliance, headquartered in Cambridge, is an advocacy group for bicyclists, pedestrians, and walkable neighborhoods. Walking Walking is a popular activity in Cambridge. In 2000, among US cities with more than 100,000 residents, Cambridge had the highest percentage of commuters who walked to work. Cambridge's major historic squares have changed into modern walking neighborhoods, including traffic calming features based on the needs of pedestrians rather than of motorists. Intercity The Boston intercity bus and train stations at South Station, Boston, and Logan International Airport in East Boston, are accessible by subway. The Fitchburg Line rail service from Porter Square connects to some western suburbs. Since October 2010, there has also been intercity bus service between Alewife Station (Cambridge) and New York City. Police department In addition to the Cambridge Police Department, the city is patrolled by the Fifth (Brighton) Barracks of Troop H of the Massachusetts State Police. Owing, however, to proximity, the city also practices functional cooperation with the Fourth (Boston) Barracks of Troop H, as well. The campuses of Harvard and MIT are patrolled by the Harvard University Police Department and MIT Police Department, respectively. Fire department The city of Cambridge is protected by the Cambridge Fire Department. Established in 1832, the CFD operates eight engine companies, four ladder companies, one rescue company, and two paramedic squad companies from eight fire stations located throughout the city. The Acting Chief is Gerard Mahoney. Emergency medical services (EMS) The city of Cambridge receives emergency medical services from PRO EMS, a privately contracted ambulance service. Public library services Further educational services are provided at the Cambridge Public Library. The large modern main building was built in 2009, and connects to the restored 1888 Richardson Romanesque building. It was founded as the private Cambridge Athenaeum in 1849 and was acquired by the city in 1858, and became the Dana Library. The 1888 building was a donation of Frederick H. Rindge. Sister cities and twin towns Cambridge's sister cities with active relationships are: Coimbra, Portugal (1982) Gaeta, Italy (1982) Tsukuba, Japan (1983) San José Las Flores, El Salvador (1987) Yerevan, Armenia (1987) Galway, Ireland (1997) Les Cayes, Haiti (2014) Cambridge has ten additional inactive sister city relationships: Dublin, Ireland (1983) Ischia, Italy (1984) Catania, Italy (1987) Kraków, Poland (1989) Florence, Italy (1992) Santo Domingo Oeste, Dominican Republic (2003) Southwark, England (2004) Yuseong (Daejeon), Korea (2005) Haidian (Beijing), China (2005) Cienfuegos, Cuba (2005)
Cambridge, Evesham, New Jersey Cambridge (town), New York Cambridge (village), New York Cambridge, Ohio Cambridge, Vermont Cambridge (village), Vermont Cambridge, Wisconsin Cambridge City, Indiana Cambridge Springs, Pennsylvania Cambridge Township, Guernsey County, Ohio Cambridge Township, Henry County, Illinois Cambridge Township, Michigan Cambridge Township, Minnesota Cambridge Township, Pennsylvania Extraterrestrial 2531 Cambridge, a stony Main Belt asteroid in the Solar System People Surnames Alice Cambridge (1762–1829), early Irish Methodist preacher Alyson Cambridge (born 1980), American operatic soprano and classical music, jazz, and American popular song singer Asuka Cambridge (born 1993), Japanese sprint athlete Barrington Cambridge (born 1957), Guyanese boxer Godfrey Cambridge (1933–1976), American stand-up comic and actor Richard Owen Cambridge (1717–1802), British poet Titles Duke of Cambridge Brands and enterprises Cambridge (cigarette) Cambridge Audio, a manufacturer of audio equipment Cambridge Glass, a glass company of Cambridge, Ohio Cambridge Scientific Instrument Company, founded 1881 in England Cambridge SoundWorks, a manufacturer of audio equipment Cambridge Theatre, a theatre in the West End of London Cambridge University Press Educational institutions Cambridge State University, US The Cambridge School (disambiguation) University of Cambridge, UK Other uses Cambridge (1825
Dexter left teaching and took up the post of senior assistant secretary at the University of Oxford Delegacy of Local Examinations (UODLE) in Oxford, a job he held until his retirement in 1988. In November 2008, Dexter featured prominently in the BBC programme "How to Solve a Cryptic Crossword" as part of the Time Shift series, in which he recounted some of the crossword clues solved by Morse. Writing career The initial books written by Dexter were general studies textbooks. He began writing mysteries in 1972 during a family holiday. Last Bus to Woodstock was published in 1975 and introduced the character of Inspector Morse, the irascible detective whose penchants for cryptic crosswords, English literature, cask ale, and the music of Wagner reflect Dexter's own enthusiasms. Dexter's plots used false leads and other red herrings. The success of the 33 two-hour episodes of the ITV television series Inspector Morse, produced between 1987 and 2000 and featuring both Morse (John Thaw) and his assistant Sergeant Robert Lewis (Kevin Whately), brought further attention to Dexter's writings. In the manner of Alfred Hitchcock, Dexter made a cameo appearance in almost all episodes. From 2006 to 2015, Morse's assistant featured in a 33-episode ITV series titled Lewis (Inspector Lewis in the United States), in which Lewis is assisted by DS James Hathaway, played by Laurence Fox. A prequel series, Endeavour, featuring a young Morse and starring Shaun Evans and Roger Allam, began airing on the ITV network in 2012. Dexter was a consultant in the first few years of the programme. As with Morse, Dexter occasionally made cameo appearances in Lewis and Endeavour. Endeavour has aired seven series, with the eighth series, delayed by the coronavirus pandemic, airing in September 2021 and taking young Morse's career into 1971. Part of the audio in the television episodes includes the Morse code for Morse's name, and Dexter himself served as a Morse code operator in the Royal Corps of Signals during his military service. These are false clues, however, for why Dexter named his detective Morse. Dexter received the Crime Writers' Association Gold Dagger for The Way Through the Woods in 1992 and a Cartier Diamond Dagger for lifetime achievement in 1997. In 1996, Dexter received a Macavity Award for his short story "Evans Tries an O-Level". In 1980, he was elected a member of the by-invitation-only Detection Club. In 2005 Dexter became a Fellow by Special Election of St Cross College, Oxford. In 2000 Dexter was appointed an Officer of the Order of the British Empire for services to literature. In 2001 he was awarded the Freedom of the City of Oxford. In September 2011, the University of Lincoln awarded Dexter an honorary Doctor of Letters degree. Personal life In 1956 he married Dorothy Cooper. They had a daughter, Sally, and a son, Jeremy. Death On 21 March 2017 Dexter's publisher, Macmillan, said in a statement "With immense sadness, Macmillan announces the death of Colin Dexter who died peacefully at his home in Oxford this morning."
Bibliography Inspector Morse novels Last Bus to Woodstock (1975) Last Seen Wearing (1976) The Silent World of Nicholas Quinn (1977) Service of All the Dead (1979) The Dead of Jericho (1981) The Riddle of the Third Mile (1983) The Secret of Annexe 3 (1986) The Wench is Dead (1989) The Jewel That Was Ours (1991) The Way Through the Woods (1992) The Daughters of Cain (1994) Death Is Now My Neighbour (1996) The Remorseful Day (1999) Novellas and short story collections The Inside Story (1993) Neighbourhood Watch (1993) Morse's Greatest Mystery (1993); also published as As Good as Gold "As Good as Gold" (Morse) "Morse's Greatest Mystery" (Morse) "Evans Tries an O-Level" "Dead as a Dodo" (Morse) "At the Lulu-Bar Motel" "Neighbourhood Watch" (Morse) "A Case of Mis-Identity" (a Sherlock Holmes pastiche) "The Inside Story" (Morse) "Monty's Revolver" "The Carpet-Bagger" "Last Call" (Morse) Uncollected short stories "The Burglar" in You, The Mail on Sunday (1994) "The Double Crossing" in Mysterious Pleasures (2003) "Between the Lines" in The Detection Collection (2005) "The Case of the Curious Quorum" (featuring Inspector Lewis) in The Verdict of Us All (2006) |
Science and Technology is a state college by classification. Usually, the term "college" is also thought of as a hierarchical demarcation between the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College), and increase in the diversity of the offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case of Urios College which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by a legislation by the Congress or Senate. In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university. When it comes to referring to the level of education, college is the term more used to be synonymous to tertiary or higher education. A student who is or has studied his/her undergraduate degree at either an institution with college or university in its name is considered to be going to or have gone to college. Portugal Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides from basic to secondary education. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school. Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542. Singapore The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively. The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth. South Africa Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa. Sri Lanka There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges. 
Turkey In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College. United Kingdom Secondary education and further education Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College. Higher education In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university. Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees. In England, , over 60% of the higher education providers directly funded by HEFCE (208/340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London. Colleges within universities vary immensely in their responsibilities. The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. 
When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised. The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university. A university college is an independent institution with the power to award taught degrees, but which has not been granted university status. University College is a protected title that can only be used with permission, although note that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL holding full degree awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college. United States In the United States, there are over 7021 colleges and universities. A "college" in the US formally denotes a constituent part of a university, but in popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally a college or a university. Some students choose to dual-enroll, by taking college classes while still in high school. The word and its derivatives are the standard terms used to describe the institutions and experiences associated with American post-secondary undergraduate education. Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, grants, or some combination of these payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called "public" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition. Colleges vary in terms of size, degree, and length of stay. Two-year colleges, also known as junior or community colleges, usually offer an associate degree, and four-year colleges usually offer a bachelor's degree. Often, these are entirely undergraduate institutions, although some have graduate school programs. Four-year institutions in the U.S. that emphasize a liberal arts curriculum are known as liberal arts colleges. Until the 20th century, liberal arts, law, medicine, theology, and divinity were about the only form of higher education available in the United States. These schools have traditionally emphasized instruction at the undergraduate level, although advanced research may still occur at these institutions. While there is no national standard in the United States, the term "university" primarily designates institutions that provide undergraduate and graduate education. 
A university typically has as its core and its largest internal division an undergraduate college teaching a liberal arts curriculum, also culminating in a bachelor's degree. What often distinguishes a university is having, in addition, one or more graduate schools engaged in both teaching graduate classes and in research. Often these would be called a School of Law or School of Medicine, (but may also be called a college of law, or a faculty of law). An exception is Vincennes University, Indiana, which is styled and chartered as a "university" even though almost all of its academic programs lead only to two-year associate degrees. Some institutions, such as Dartmouth College and The College of William & Mary, have retained the term "college" in their names for historical reasons. In one unique case, Boston College and Boston University, the former located in Chestnut Hill, Massachusetts and the latter located in Boston, Massachusetts, are completely separate institutions. Usage of the terms varies among the states. In 1996, for example, Georgia changed all of its four-year institutions previously designated as colleges to universities, and all of its vocational technology schools to technical colleges. The terms "university" and "college" do not exhaust all possible titles for an American institution of higher education. Other options include "institute" (Worcester Polytechnic Institute and Massachusetts Institute of Technology), "academy" (United States Military Academy), "union" (Cooper Union), "conservatory" (New England Conservatory), and "school" (Juilliard School). In colloquial use, they are still referred to as "college" when referring to their undergraduate studies. The term college is also, as in the United Kingdom, used for a constituent semi-autonomous part of a larger university but generally organized on academic rather than residential lines. For example, at many institutions, the undergraduate portion of the university can be briefly referred to as the college (such as The College of the University of Chicago, Harvard College at Harvard, or Columbia College at Columbia) while at others, such as the University of California, Berkeley, each of the faculties may be called a "college" (the "college of engineering", the "college of nursing", and so forth). There exist other variants for historical reasons; for example, Duke University, which was called Trinity College until the 1920s, still | new universities, Dublin City University and University of Limerick, were initially National Institute for Higher Education institutions. These institutions offered university level academic degrees and research from the start of their existence and were awarded university status in 1989 in recognition of this. Third level technical education in the state has been carried out in the Institutes of Technology, which were established from the 1970s as Regional Technical Colleges. These institutions have delegated authority which entitles them to give degrees and diplomas from Quality and Qualifications Ireland (QQI) in their own names. A number of private colleges exist such as Dublin Business School, providing undergraduate and postgraduate courses validated by QQI and in some cases by other universities. Other types of college include colleges of education, such as the Church of Ireland College of Education. These are specialist institutions, often linked to a university, which provide both undergraduate and postgraduate academic degrees for people who want to train as teachers. 
A number of state-funded further education colleges exist, which offer vocational education and training in a range of areas from business studies and information and communications technology to sports injury therapy. These courses are usually one, two or, less often, three years in duration and are validated by QQI at Levels 5 or 6, or lead to the BTEC Higher National Diploma award, a Level 6/7 qualification validated by Edexcel. There are numerous private colleges (particularly in Dublin and Limerick) which offer both further and higher education qualifications. These degrees and diplomas are often certified by foreign universities or international awarding bodies and are aligned to the National Framework of Qualifications at Levels 6, 7 and 8. Israel In Israel, any non-university higher-learning facility is called a college. Institutions accredited by the Council for Higher Education in Israel (CHE) to confer a bachelor's degree are called "academic colleges". These colleges (at least four as of 2012) may also offer master's degrees and act as research facilities. There are also over twenty teacher training colleges or seminaries, most of which may award only a Bachelor of Education (BEd) degree. Academic colleges: any educational facility that has been approved to offer at least a bachelor's degree is entitled by the CHE to use the term "academic college" in its name. Engineering academic college: an academic facility that offers at least a bachelor's degree and in which most faculties provide engineering degrees and engineering licenses. Educational academic college: once an educational facility that has been approved for "teachers' seminar" status is also approved to award a Bachelor of Education, its name is changed to include "educational academic college". Technical college: a "technical college" is an educational facility approved to award a practical engineer (הנדסאי) diploma (14th class) or a technician (טכנאי) diploma (13th class), together with the associated licenses. Training college: a "training college" is an educational facility that provides basic training allowing a person to receive a working permit in a field such as alternative medicine, cooking, art, or the mechanical, electrical and other trades. A trainee can receive the right to work in certain professions as an apprentice (junior mechanic, junior electrician, etc.). After working in the field for long enough, an apprentice can obtain a license to operate (mechanic, electrician). These facilities are mostly used to provide basic training for low-tech jobs and for job seekers without any training, with such training provided through the nation's Employment Service (שירות התעסוקה). Macau Following the Portuguese usage, the term "college" (colégio) in Macau has traditionally been used in the names of private (and non-governmental) pre-university educational institutions, which cover form one to form six. Such schools are usually run by the Roman Catholic church or missionaries in Macau. Examples include Chan Sui Ki Perpetual Help College, Yuet Wah College, and Sacred Heart Canossian College. Netherlands In the Netherlands there are three main educational routes after high school. MBO (middle-level applied education), which is the equivalent of junior college.
Designed to prepare students for either skilled trades and technical occupations and workers in support roles in professions such as engineering, accountancy, business administration, nursing, medicine, architecture, and criminology or for additional education at another college with more advanced academic material. HBO (higher professional education), which is the equivalent of college and has a professional orientation. After HBO (typically 4–6 years), pupils can enroll in a (professional) master's program (1–2 years) or enter the job market. The HBO is taught in vocational universities (hogescholen), of which there are over 40 in the Netherlands, each of which offers a broad variety of programs, with the exception of some that specialize in arts or agriculture. Note that the hogescholen are not allowed to name themselves university in Dutch. This also stretches to English and therefore HBO institutions are known as universities of applied sciences. WO (Scientific education), which is the equivalent to university level education and has an academic orientation. HBO graduates can be awarded two titles, which are Baccalaureus (bc.) and Ingenieur (ing.). At a WO institution, many more bachelor's and master's titles can be awarded. Bachelor's degrees: Bachelor of Arts (BA), Bachelor of Science (BSc) and Bachelor of Laws (LLB). Master's degrees: Master of Arts (MA), Master of Laws (LLM) and Master of Science (MSc). The PhD title is a research degree awarded upon completion and defense of a doctoral thesis. New Zealand The constituent colleges of the former University of New Zealand (such as Canterbury University College) have become independent universities. Some halls of residence associated with New Zealand universities retain the name of "college", particularly at the University of Otago (which although brought under the umbrella of the University of New Zealand, already possessed university status and degree awarding powers). The institutions formerly known as "Teacher-training colleges" now style themselves "College of education". Some universities, such as the University of Canterbury, have divided their university into constituent administrative "Colleges" – the College of Arts containing departments that teach Arts, Humanities and Social Sciences, College of Science containing Science departments, and so on. This is largely modelled on the Cambridge model, discussed above. Like the United Kingdom some professional bodies in New Zealand style themselves as "colleges", for example, the Royal Australasian College of Surgeons, the Royal Australasian College of Physicians. In some parts of the country, secondary school is often referred to as college and the term is used interchangeably with high school. This sometimes confuses people from other parts of New Zealand. But in all parts of the country many secondary schools have "College" in their name, such as Rangitoto College, New Zealand's largest secondary. 
Philippines In the Philippines, colleges usually refer to institutions of learning that grant degrees but whose scholastic fields are not as diverse as that of a university (University of Santo Tomas, University of the Philippines, Ateneo de Manila University, De La Salle University, Far Eastern University, and AMA University), such as the San Beda College which specializes in law, AMA Computer College whose campuses are spread all over the Philippines which specializes in information and computing technologies, and the Mapúa Institute of Technology which specializes in engineering, or to component units within universities that do not grant degrees but rather facilitate the instruction of a particular field, such as a College of Science and College of Engineering, among many other colleges of the University of the Philippines. A state college may not have the word "college" on its name, but may have several component colleges, or departments. Thus, the Eulogio Amang Rodriguez Institute of Science and Technology is a state college by classification. Usually, the term "college" is also thought of as a hierarchical demarcation between the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College), and increase in the diversity of the offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case of Urios College which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by a legislation by the Congress or Senate. In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university. When it comes to referring to the level of education, college is the term more used to be synonymous to tertiary or higher education. A student who is or has studied his/her undergraduate degree at either an institution with college or university in its name is considered to be going to or have gone to college. Portugal Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides from basic to secondary education. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school. Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542. Singapore The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). 
Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively. The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth. South Africa Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa. Sri Lanka There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges. Turkey In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College. United Kingdom Secondary education and further education Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College. Higher education In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university. Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees. In England, , over 60% of the higher education providers directly funded by HEFCE (208/340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London. Colleges within universities vary immensely in their responsibilities. 
The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised. The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university. A university college is an independent institution with the power to award taught degrees, but which has not been granted university status. University College is a protected title that can only be used with permission, although note that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL holding full degree awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college. United States In the United States, there are over 7021 colleges and universities. A "college" in the US formally denotes a constituent part of a university, but in popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally |
to society and credibility in media. In 2018, a benchmarking report from MIT ranked Chalmers among the top 10 in the world in engineering education, while in 2019 the European Commission recognized Chalmers as one of Europe's top universities, based on the U-Multirank rankings. Furthermore, in 2020, the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness, while the QS World University Rankings placed Chalmers 81st in the world in graduate employability. Additionally, in 2021, the Academic Ranking of World Universities placed Chalmers 51–75 in the world in the field of electrical & electronic engineering, the QS World University Rankings placed Chalmers 79th in the world in the field of engineering & technology, the Times Higher Education World University Rankings ranked Chalmers 68th in the world for engineering & technology, and the U.S. News & World Report Best Global University Ranking placed Chalmers 84th in the world for engineering. In the 2011 International Professional Ranking of Higher Education Institutions, which is based on the number of alumni holding a post of Chief Executive Officer (CEO) or equivalent in one of the Fortune Global 500 companies, Chalmers ranked 38th in the world, 1st in Sweden and 15th in Europe. Ties and partnerships Chalmers has partnerships with major industries, mostly in the Gothenburg region, such as Ericsson, Volvo, and SKF. The University has general exchange agreements with many European and U.S. universities and maintains a special exchange program agreement with National Chiao Tung University (NCTU) in Taiwan, where exchange students from the two universities maintain offices that, among other things, help local students apply and prepare for an exchange year and act as representatives. It also contributes to the Top Industrial Managers for Europe (TIME) network. A close collaboration between the Department of Computer Science and Engineering at Chalmers and ICVR at ETH Zurich is being established. As of 2014, Chalmers University of Technology is a member of the IDEA League network. Students Approximately 40% of Sweden's graduate engineers and architects are educated at Chalmers. Each year, around 250 postgraduate degrees are awarded as well as 850 graduate degrees. About 1,000 post-graduate students attend programmes at the university, and many students are taking Master of Science engineering programmes and the Master of Architecture programme. Since 2007, all master's programmes have been taught in English for both national and international students. This was a result of the adaptation to the Bologna process, which started at Chalmers in 2004 (the first technical university in Sweden to do so). Currently, about 10% of all students at Chalmers come from countries outside Sweden to enrol in a master's or PhD program. Around 2,700 students also attend Bachelor of Science engineering programmes, merchant marine and other undergraduate courses at Campus Lindholmen. Chalmers also shares some students with Gothenburg University in the joint IT University project. The IT University focuses exclusively on information technology and offers bachelor's and master's programmes with degrees issued from either Chalmers or Gothenburg University, depending on the programme.
Chalmers confers honorary doctoral degrees to people outside the university who have shown great merit in their research or in society. Organization Chalmers is an aktiebolag with 100 shares à 1,000 SEK, all of which are owned by the Chalmers University of Technology Foundation, a private foundation, which appoints the university board and the president. The foundation has its members appointed by the Swedish government (4 | in 1829 following a donation by William Chalmers, a director of the Swedish East India Company. He donated part of his fortune for the establishment of an "industrial school". Chalmers was run as a private institution until 1937 when the institute became a state-owned university. In 1994, the school was incorporated as an aktiebolag under the control of the Swedish Government, the faculty and the Student Union. Chalmers is one of only three universities in Sweden which are named after a person, the other two being Karolinska Institutet and Linnaeus University. Departments Beginning 1 May 2017, Chalmers has 13 departments. Architecture and Civil Engineering Biology and Biological Engineering Chemistry and Chemical Engineering Communication and Learning in Science Computer Science and Engineering Electrical Engineering Industrial and Materials Science Mathematical Sciences Mechanics and Maritime Sciences Microtechnology and Nanoscience Physics Space, Earth and Environment Technology Management and Economics Furthermore, Chalmers is home to eight Areas of Advance and six national competence centers in key fields such as materials, mathematical modelling, environmental science, and vehicle safety. Research infrastructure Chalmers University of Technology's research infrastructure includes everything from advanced real or virtual labs to large databases, computer capacity for large-scale calculations and research facilities. Chalmers AI Research Centre, CHAIR Chalmers Centre for Computational Science and Engineering, C3SE Chalmers Mass Spectrometry Infrastructure, CMSI Chalmers Power Central Chalmers Materials Analysis Laboratory Chalmers Simulator Centre Chemical Imaging Infrastructure Facility for Computational Systems Biology HSB Living Lab Nanofabrication Laboratory Onsala Space Observatory Revere – Chalmers Resource for Vehicle Research The National laboratory in terahertz characterisation SAFER - Vehicle and Traffic Safety Centre at Chalmers Rankings and reputation Since 2012, Chalmers has been achieved the highest reputation for Swedish Universities by the Kantar Sifo's Reputation Index. According to the survey, Chalmers is the most well-known university in Sweden regarded as a successful and competitive high-class institution with a large contribution to society and credibility in media. In 2018, a benchmarking report from MIT ranked Chalmers top 10 in the world of engineering education while in 2019, the European Commission recognized Chalmers as one of Europe's top universities, based on the U-Multirank rankings. Furthermore, in 2020, the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness, while the QS World University Rankings, placed Chalmers 81st in the world in graduate employability. 
Additionally, in 2021, the Academic Ranking of World Universities placed Chalmers 51–75 in the world in the field of electrical & electronic engineering, the QS World University Rankings placed Chalmers 79th in the world in the field of engineering & technology, the Times Higher Education World University Rankings ranked Chalmers 68th in the world for engineering & technology, and the U.S. News & World Report Best Global University Ranking placed Chalmers 84th in the world for engineering. In the 2011 International Professional Ranking of Higher Education Institutions, which is based on the number of alumni holding a post of Chief Executive Officer
this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time. In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, "its mere existence is evidence that this book form had a prehistory", and that "early experiments with this book form may well have taken place outside of Egypt." Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). The parchment notebook pages were "more durable, and could withstand being folded and stitched to other sheets". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. "Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth." As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews. The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160. In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport. The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost. 
The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl. In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There | role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews. The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160. In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport. The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost. The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl. In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. 
The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang Dynasty (618-907), improved by the 'butterfly' bindings of the Song dynasty (960-1279), the wrapped back binding of the Yuan dynasty (1271-1368), the stitched binding of the Ming (1368-1644) and Qing dynasties (1644-1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures. Judaism still retains the Torah scroll, at least for ceremonial use. From scrolls to codex Among the experiments of earlier centuries, scrolls were sometimes unrolled horizontally, as a succession of columns. (The Dead Sea Scrolls are a famous example of this format.) This made it possible to fold the scroll as an accordion. The next evolutionary step was to cut the folios and sew and glue them at their centers, making it easier to use the papyrus or vellum recto-verso as with a modern book. Traditional bookbinders would call one of these assembled, trimmed and bound folios (that is, the "pages" of the book as a whole, comprising the front matter and contents) a codex in contradistinction to the cover or case, producing the format of book now colloquially known as a hardcover. In the hardcover bookbinding process, the procedure of binding the codex is very different to that of producing and attaching the case. Preparation The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime but not together. The skin is soaked in the lime for a couple of days. The hair is removed, and the skin is dried by attaching it to a frame, called a herse. The parchment maker attaches the skin at points around the circumference. The skin attaches to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to the cord around a pebble called a pippin. After completing that, the maker uses a crescent shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they |
may be paddock weaned, often next to their mothers, or weaned in stockyards. The latter system is preferred by some as it accustoms the weaners to the presence of people and they are trained to take feed other than grass. Small numbers may also be weaned with their dams with the use of weaning nose rings or nosebands which results in the mothers rejecting the calves' attempts to suckle. Many calves are also weaned when they are taken to the large weaner auction sales that are conducted in the south eastern states of Australia. Victoria and New South Wales have yardings of up to 8,000 weaners (calves) for auction sale in one day. The best of these weaners may go to the butchers. Others will be purchased by re-stockers to grow out and fatten on grass or as potential breeders. In the United States these weaners may be known as feeders and would be placed directly into feedlots. At about 12 months old a beef heifer reaches puberty if she is well grown. Diseases Calves suffer from few congenital abnormalities but the Akabane virus is widely distributed in temperate to tropical regions of the world. The virus is a teratogenic pathogen which causes abortions, stillbirths, premature births and congenital abnormalities, but occurs only during some years. Uses Calf meat for human consumption is called veal, and is usually produced from the male calves of Dairy cattle. Also eaten are calf's brains and calf liver. The hide is used to make calfskin, or tanned into leather and called calf leather, or sometimes in the US "novillo", the Spanish term. The fourth compartment of the stomach of slaughtered milk-fed calves is the source of rennet. The intestine is used to make Goldbeater's skin, and is the source of Calf Intestinal Alkaline Phosphatase (CIP). Dairy cows can only produce milk after having calved, and dairy cows need to produce one calf each year in order to remain in production. Female calves will become a replacement dairy cow. Male dairy calves are generally reared for beef or veal; relatively few are kept for breeding purposes. Other animals In English the term "calf" is used by extension for the young of various other large species of mammal. In addition to other bovid species (such as bison, yak and water buffalo), these include the young of camels, dolphins, elephants, giraffes, | an hour. However, for the first few days they are not easily able to keep up with the rest of the herd, so young calves are often left hidden by their mothers, who visit them several times a day to suckle them. By a week old the calf is able to follow the mother all the time. Some calves are ear tagged soon after birth, especially those that are stud cattle in order to correctly identify their dams (mothers), or in areas (such as the EU) where tagging is a legal requirement for cattle. Typically when the calves are about two months old they are branded, ear marked, castrated and vaccinated. Calf rearing systems The single suckler system of rearing calves is similar to that occurring naturally in wild cattle, where each calf is suckled by its own mother until it is weaned at about nine months old. This system is commonly used for rearing beef cattle throughout the world. Cows kept on poor forage (as is typical in subsistence farming) produce a limited amount of milk. A calf left with such a mother all the time can easily drink all the milk, leaving none for human consumption. 
For dairy production under such circumstances, the calf's access to the cow must be limited, for example by penning the calf and bringing the mother to it once a day after partly milking her. The small amount of milk available for the calf under such systems may mean that it takes a longer time to rear, and in subsistence farming it is therefore common for cows to calve only in alternate years. In more intensive dairy farming, cows can easily be bred and fed to produce far more milk than one calf can drink. In the multi-suckler system, several calves are fostered onto one cow in addition to her own, and these calves' mothers can then be used wholly for milk production. More commonly, calves of dairy cows are fed formula milk from soon after birth, usually from a bottle or bucket. Purebred female calves of dairy cows are reared as replacement dairy cows. Most purebred dairy calves are produced by artificial insemination (AI). By this method each bull can serve many cows, so only a very few of the purebred dairy male calves are needed to provide bulls for breeding. The remainder of the male calves may be reared for beef or veal; however, some extreme dairy breeds carry so little muscle that rearing the purebred male calves may be uneconomic, and in this case they are often killed soon after birth and disposed of. Only a proportion of purebred heifers are needed to provide replacement cows, so often some of the cows in dairy herds are put to a beef bull to produce crossbred calves suitable for rearing as beef. Veal calves may be reared entirely on milk formula and killed at about 18 or 20 weeks as "white" veal, or fed on grain and hay and killed at 22 to 35 weeks to produce red or pink veal. Growth A commercial steer or bull calf is expected to put on about per month. A nine-month-old steer or bull is therefore expected to weigh about . Heifers will weigh at least at eight months of age. Calves are usually weaned at about eight to nine months of age, but depending on the season and condition of the dam, they might be weaned |
was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon's mouse appears to have been the first artificial learning device of its kind. Shannon's estimate for the complexity of chess In 1949 Shannon completed a paper (published in March 1950) which estimates the game-tree complexity of chess, which is approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis). Shannon's computer chess program On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess". The paper was presented at the National Institute for Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 it was published in Philosophical Magazine, and is considered one of the first articles published on the topic of programming a computer for playing chess, and using a computer to solve the game. His process for having the computer decide on which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative value (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available. Shannon's maxim Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim". Commemorations Shannon centenary The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop Jerusalem and the IEEE Information Theory Society Newsletter. A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society. Some of the planned activities included: Bell Labs hosted the First Shannon Conference on the Future of the Information Age on April 28–29, 2016, in Murray Hill, New Jersey, to celebrate Claude Shannon and the continued impact of his legacy on society. The event includes keynote speeches by global luminaries and visionaries of the information age who will explore the impact of information theory on society and our digital future, informal recollections, and leading technical presentations on subsequent related work in other areas such as bioinformatics, economic systems, and social networks.
There is also a student competition Bell Labs launched a Web exhibit on April 30, 2016, chronicling Shannon's hiring at Bell Labs (under an NDRC contract with US Government), his subsequent work there from 1942 through 1957, and details of Mathematics Department. The exhibit also displayed bios of colleagues and managers during his tenure, as well as original versions of some of the technical memoranda which subsequently became well known in published form. The Republic of Macedonia is planning a commemorative stamp. A USPS commemorative stamp is being proposed, with an active petition. A documentary on Claude Shannon and on the impact of information theory, The Bit Player, is being produced by Sergio Verdú and Mark Levinson. A trans-Atlantic celebration of both George Boole's bicentenary and Claude Shannon's centenary that is being led by University College Cork and the Massachusetts Institute of Technology. A first event was a workshop in Cork, When Boole Meets Shannon, and will continue with exhibits at the Boston Museum of Science and at the MIT Museum. Many organizations around the world are holding observance events, including the Boston Museum of Science, the Heinz-Nixdorf Museum, the Institute for Advanced Study, Technische Universität Berlin, University of South Australia (UniSA), Unicamp (Universidade Estadual de Campinas), University of Toronto, Chinese University of Hong Kong, Cairo University, Telecom ParisTech, National Technical University of Athens, Indian Institute of Science, Indian Institute of Technology Bombay, Indian Institute of Technology Kanpur, Nanyang Technological University of Singapore, University of Maryland, University of Illinois at Chicago, École Polytechnique Federale de Lausanne, The Pennsylvania State University (Penn State), University of California Los Angeles, Massachusetts Institute of Technology, Chongqing University of Posts and Telecommunications, and University of Illinois at Urbana-Champaign. A logo that appears on this page was crowdsourced on Crowdspring. The Math Encounters presentation of May 4, 2016, at the National Museum of Mathematics in New York, titled Saving Face: Information Tricks for Love and Life, focused on Shannon's work in Information Theory. A video recording and other material are available. Awards and honors list The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1972. Stuart Ballantine Medal of the Franklin Institute, 1955 Harvey Prize, the | device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him." The cryptocurrency unit shannon (a synonym for gwei) is named after him. A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017. On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday. The Bit Player, a feature film about Shannon directed by Mark Levinson premiered at the World Science Festival in 2019. Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020. Other work Shannon's mouse "Theseus", created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares. 
The maze configuration was flexible and it could be modified arbitrarily by rearranging movable partitions. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon's mouse appears to have been the first artificial learning device of its kind. Shannon's estimate for the complexity of chess In 1949 Shannon completed a paper (published in March 1950) which estimates the game-tree complexity of chess, which is approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis). Shannon's computer chess program On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess". The paper was presented at the National Institute for Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 it was published in Philosophical Magazine, and is considered one of the first articles published on the topic of programming a computer for playing chess, and using a computer to solve the game. His process for having the computer decide on which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative value (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available; a brief illustrative sketch of this scoring scheme appears after these Shannon entries, below. Shannon's maxim Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim". Commemorations Shannon centenary The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop Jerusalem and the IEEE Information Theory Society Newsletter. A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society. Some of the planned activities included: Bell Labs hosted the First Shannon Conference on the Future of the Information Age on April 28–29, 2016, in Murray Hill, New Jersey, to celebrate Claude Shannon and the continued impact of his legacy on society.
The event includes keynote speeches by global luminaries and visionaries of the information age who will explore the impact of information theory on society and our digital future, informal recollections, and leading technical presentations on subsequent related work in other areas such as bioinformatics, economic systems, and social networks. There is also a student competition Bell Labs launched a Web exhibit on April 30, 2016, chronicling Shannon's hiring at Bell Labs (under an NDRC contract with US Government), his subsequent work there from 1942 through 1957, and details of Mathematics Department. The exhibit also displayed bios of colleagues and managers during his tenure, as well as original versions of some of the technical memoranda which subsequently became well known in published form. The Republic of Macedonia is planning a commemorative stamp. A USPS commemorative stamp is being proposed, with an active petition. A documentary on Claude Shannon and on the impact of information theory, The Bit Player, is being produced by Sergio Verdú and Mark Levinson. A trans-Atlantic celebration of both George Boole's bicentenary and Claude Shannon's centenary that is being led by University College Cork and the Massachusetts Institute of Technology. A first event was a workshop in Cork, When Boole Meets Shannon, and will continue with exhibits at the Boston Museum of Science and at the MIT Museum. Many organizations around the world are holding observance events, including the Boston Museum of Science, the Heinz-Nixdorf Museum, the Institute for Advanced Study, Technische Universität Berlin, University of South Australia (UniSA), Unicamp (Universidade Estadual de Campinas), University of Toronto, Chinese University of Hong Kong, Cairo University, Telecom ParisTech, National Technical University of Athens, Indian Institute of Science, Indian Institute of Technology Bombay, Indian Institute of Technology Kanpur, Nanyang Technological University of Singapore, University of Maryland, University of Illinois at Chicago, École Polytechnique Federale de Lausanne, The Pennsylvania State University (Penn State), University of California Los Angeles, Massachusetts Institute of Technology, Chongqing University of Posts and Telecommunications, and University of Illinois at Urbana-Champaign. A logo that appears on this page was crowdsourced on Crowdspring. The Math Encounters presentation of May 4, 2016, at the National Museum of Mathematics in New York, titled Saving Face: Information Tricks for Love and Life, focused on |
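The position-scoring scheme and minimax procedure described in the Shannon entries above can be made concrete with a short sketch. This is a minimal illustration under stated assumptions rather than Shannon's actual program: the position-summary type and the names SideSummary, evaluate and minimax are invented here for the example, as are the demo numbers; only the weights come from the text (material counted at 1, 3, 3, 5 and 9, minus 0.5 per doubled, backward or isolated pawn, plus 0.1 per legal move, scored as white minus black).

from dataclasses import dataclass

# Material weights quoted in the text: pawn 1, knight/bishop 3, rook 5, queen 9.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

@dataclass
class SideSummary:
    pieces: dict          # e.g. {"pawn": 8, "knight": 2, ...}
    doubled_pawns: int
    backward_pawns: int
    isolated_pawns: int
    legal_moves: int

def side_score(s: SideSummary) -> float:
    material = sum(PIECE_VALUES[p] * n for p, n in s.pieces.items())
    structure = -0.5 * (s.doubled_pawns + s.backward_pawns + s.isolated_pawns)
    mobility = 0.1 * s.legal_moves
    return material + structure + mobility

def evaluate(white: SideSummary, black: SideSummary) -> float:
    # Shannon's convention: subtract the black score from the white score.
    return side_score(white) - side_score(black)

def minimax(position, depth, maximizing, successors, static_eval):
    # Generic minimax driver: recurse over successor positions until the depth
    # limit (or a terminal position), then fall back on the static evaluation.
    moves = successors(position)
    if depth == 0 or not moves:
        return static_eval(position)
    scores = (minimax(p, depth - 1, not maximizing, successors, static_eval) for p in moves)
    return max(scores) if maximizing else min(scores)

# Hypothetical demo numbers, purely for illustration.
white = SideSummary({"pawn": 8, "knight": 2, "bishop": 2, "rook": 2, "queen": 1}, 1, 0, 0, 30)
black = SideSummary({"pawn": 7, "knight": 2, "bishop": 1, "rook": 2, "queen": 1}, 0, 1, 1, 25)
print(evaluate(white, black))  # positive values favor white

As for the complexity figure, the 10^120 "Shannon number" quoted above follows from Shannon's rough estimate of about 10^3 continuations per pair of moves sustained over a typical game of roughly 40 move pairs, since (10^3)^40 = 10^120.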
decomposition of complex organic molecules into smaller ones Cracking joints, the practice of manipulating one's bone joints to make a sharp sound Cracking codes, see cryptanalysis Whip cracking Safe cracking Crackin, band featuring Lester Abrams In computing': Another name for security hacking; the practice of defeating computer security. Password cracking, the process of discovering the plaintext of an encrypted computer password. Software | codes, see cryptanalysis Whip cracking Safe cracking Crackin, band featuring Lester Abrams In computing': Another name for security hacking; the practice of defeating computer security. Password cracking, the process of discovering the |
community and spatial subdivisions of cities and other large settlements may have formed communities. Archaeologists typically use similarities in material culture—from house types to styles of pottery—to reconstruct communities in the past. This classification method relies on the assumption that people or households will share more similarities in the types and styles of their material goods with other members of a social community than they will with outsiders. Sociology Ecology In ecology, a community is an assemblage of populations - potentially of different species - interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect social structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism: Competition typically results in a double negative—that is both species lose in the interaction. Predation involves a win/lose situation, with one species winning. Mutualism sees both species co-operating in some way, with both winning. The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. Semantics The concept of "community" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness - veering towards an almost-achievable utopian community, in fact. In contrast, the epidemiological term "community transmission" can have negative implications; and instead of a "criminal community" one often speaks of a "criminal underworld" or of the "criminal fraternity". Key concepts Gemeinschaft and Gesellschaft In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as "community") and Gesellschaft ("society" or "association"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaft stress personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaft stress indirect interactions, impersonal roles, formal values, and beliefs based on such interactions. Sense of community In a seminal 1986 study, McMillan and Chavis identify four elements of "sense of community": membership: feeling of belonging or of sharing a sense of personal relatedness, influence: mattering, making a difference to a group and of the group mattering to its members reinforcement: integration and fulfillment of needs, shared emotional connection. A "sense of community index (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities. Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging. 
Socialization The process of learning to adopt the behavior patterns of the community is called socialization. The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors. Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important "habits of the heart," as de Tocqueville put it, in an individual's involvement in community. Community development Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development. Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the John F. Kennedy School of Government at Harvard University are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners. At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. 
The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other | of their material goods with other members of a social community than they will with outsiders. Sociology Ecology In ecology, a community is an assemblage of populations - potentially of different species - interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect social structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism: Competition typically results in a double negative—that is both species lose in the interaction. Predation involves a win/lose situation, with one species winning. Mutualism sees both species co-operating in some way, with both winning. The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. Semantics The concept of "community" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness - veering towards an almost-achievable utopian community, in fact. In contrast, the epidemiological term "community transmission" can have negative implications; and instead of a "criminal community" one often speaks of a "criminal underworld" or of the "criminal fraternity". Key concepts Gemeinschaft and Gesellschaft In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as "community") and Gesellschaft ("society" or "association"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaft stress personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaft stress indirect interactions, impersonal roles, formal values, and beliefs based on such interactions. Sense of community In a seminal 1986 study, McMillan and Chavis identify four elements of "sense of community": membership: feeling of belonging or of sharing a sense of personal relatedness, influence: mattering, making a difference to a group and of the group mattering to its members reinforcement: integration and fulfillment of needs, shared emotional connection. A "sense of community index (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities. Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging. Socialization The process of learning to adopt the behavior patterns of the community is called socialization. 
The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors. Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important "habits of the heart," as de Tocqueville put it, in an individual's involvement in community. Community development Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development. Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the John F. Kennedy School of Government at Harvard University are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners. At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other organizations that can help in community building. 
The Institute focuses on helping communities develop by "mobilizing neighborhood assets" – building from the inside out rather than the outside in. In the disability field, community building was prevalent in the 1980s and 1990s with roots in John McKnight's approaches.McKnight, J. (1989). Beyond Community Services. Evanston, IL: Northwestern University, Center of Urban Affairs and Policy Research. Community building and organizing In The Different Drum: Community-Making and Peace (1987) Scott Peck argues that the almost accidental sense of community that exists at times of crisis can be consciously built. Peck believes that conscious community building is a process of deliberate design based on the knowledge and application of certain rules. He states that this process goes through four stages: Pseudocommunity: When people first come together, they try to be "nice" and present what they feel are their most personable and friendly characteristics. Chaos: People move beyond the inauthenticity of pseudo-community and feel safe enough to present their "shadow" selves. Emptiness: Moves |
24 colleges of applied arts and technology have been mandated to offer their own stand-alone degrees as well as to offer joint degrees with universities through "articulation agreements" that often result in students emerging with both a diploma and a degree. Thus, for example, the University of Guelph "twins" with Humber College and York University does the same with Seneca College. More recently, however, colleges have been offering a variety of their own degrees, often in business, technology, science, and other technical fields. Each province has its own educational system, as prescribed by the Canadian federalism model of governance. In the mid-1960s and early 1970s, most Canadian colleges began to provide practical education and training for the emerging and booming generation, and for immigrants from around the world who were entering Canada in increasing numbers at that time. A formative trend was the merging of the then separate vocational training and adult education (night school) institutions. Canadian colleges are either publicly funded or private post-secondary institutions (run for profit). In terms of academic pathways, Canadian colleges and universities collaborate with each other with the purpose of providing college students the opportunity to academically upgrade their education. Students can transfer their diplomas and earn transfer credits through their completed college credits towards undergraduate university degrees. The term associate degree is used in western Canada to refer to a two-year college arts or science degree, similar to how the term is used in the United States. In other parts of Canada, the term advanced degree is used to indicate a three- or four-year college program. In Quebec, three years is the norm for a university degree because a year of credit is earned in the CÉGEP (college) system. Even when speaking in English, people often refer to all colleges as Cégeps; however, the term is an acronym more correctly applied specifically to the French-language public system: Collège d'enseignement général et professionnel (CEGEP); in English: College of General and Vocational Education. The word "college" can also refer to a private high school in Quebec. Canadian community college systems List of colleges in Canada Colleges and Institutes Canada (CICan) – publicly funded educational institutions; formerly the Association of Canadian Community Colleges (ACCC) National Association of Career Colleges – privately funded educational institutions; formerly the Association of Canadian Career Colleges India In India, 98 community colleges are recognized by the University Grants Commission. The courses offered by these colleges are diplomas, advance diplomas and certificate courses. The duration of these courses usually ranges from six months to two years. Malaysia Community colleges in Malaysia are a network of educational institutions whereby vocational and technical skills training could be provided at all levels for school leavers before they entered the workforce. The community colleges also provide an infrastructure for rural communities to gain skills training through short courses as well as providing access to a post-secondary education. 
At the moment, most community colleges award qualifications up to Level 3 in the Malaysian Qualifications Framework (Certificate 3) in both the Skills sector (Sijil Kemahiran Malaysia or the Malaysian Skills Certificate) as well as the Vocational and Training sector but the number of community colleges that are starting to award Level 4 qualifications (Diploma) is increasing. This is two levels below a bachelor's degree (Level 6 in the MQF) and students within the system who intend to further their studies to that level will usually seek entry into Advanced Diploma programs in public universities, polytechnics or accredited private providers. Philippines In the Philippines, a community school functions as an elementary or secondary school during the daytime and towards the end of the day converts into a community college. This type of institution offers night classes under the supervision of the same principal, and the same faculty members who are given a part-time college teaching load. The concept of community college dates back to the time of the former | Such TAFEs are located in metropolitan, regional and rural locations of Australia. Education offered by TAFEs and colleges has changed over the years. By the 1980s, many colleges had recognised a community need for computer training. Since then thousands of people have increased skills through IT courses. The majority of colleges by the late 20th century had also become Registered Training Organisations. They offer individuals a nurturing, non-traditional education venue to gain skills that better prepare them for the workplace and potential job openings. TAFEs and colleges have not traditionally offered bachelor's degrees, instead providing pathway arrangements with universities to continue towards degrees. The American innovation of the associate degree is being developed at some institutions. Certificate courses I to IV, diplomas and advanced diplomas are typically offered, the latter deemed equivalent to an undergraduate qualification, albeit typically in more vocational areas. Recently, some TAFE institutes (and private providers) have also become higher education providers in their own right and are now starting to offer bachelor's degree programs. Canada In Canada, colleges are adult educational institutions that provide higher education and tertiary education, and grant certificates and diplomas. Alternatively, Canadian colleges are often called “institutes” or “polytechnic institutes”. As well, in Ontario, the 24 colleges of applied arts and technology have been mandated to offer their own stand-alone degrees as well as to offer joint degrees with universities through "articulation agreements" that often result in students emerging with both a diploma and a degree. Thus, for example, the University of Guelph "twins" with Humber College, and York University does the same with Seneca College. More recently, however, colleges have been offering a variety of their own degrees, often in business, technology, science, and other technical fields. Each province has its own educational system, as prescribed by the Canadian federalism model of governance. In the mid-1960s and early 1970s, most Canadian colleges began to provide practical education and training for the emerging and booming generation, and for immigrants from around the world who were entering Canada in increasing numbers at that time. A formative trend was the merging of the then separate vocational training and adult education (night school) institutions.
Canadian colleges are either publicly funded or private post-secondary institutions (run for profit). In terms of academic pathways, Canadian colleges and universities collaborate with each other with the purpose of providing college students the opportunity to academically upgrade their education. Students can transfer their diplomas and earn transfer credits through their completed college credits towards undergraduate university degrees. The term associate degree is used in western Canada to refer to a two-year college arts or science degree, similar to how the term is used in the United States. In other parts of Canada, the term advanced degree is used to indicate a three- or four-year college program. In Quebec, three years is the norm for a university degree because a year of credit is earned in the CÉGEP (college) system. Even when speaking in English, people often refer to all colleges as Cégeps; however, the term is an acronym more correctly applied specifically to the French-language public system: Collège d'enseignement général et professionnel (CEGEP); in English: College of General and Vocational Education. The word "college" can also refer to a private high school in Quebec. Canadian community college systems List of colleges in Canada Colleges and Institutes Canada (CICan) – publicly funded educational institutions; formerly the Association of Canadian Community Colleges (ACCC) National Association of Career Colleges – privately funded educational institutions; formerly the Association of Canadian Career Colleges India In India, 98 community colleges are recognized by the University Grants Commission. The courses offered by these colleges are diplomas, advance diplomas and certificate courses. The duration of these courses usually ranges from six months to two years. Malaysia Community colleges in Malaysia are a network of educational institutions whereby vocational and technical skills training could be provided at all levels for school leavers before they entered the workforce. The community colleges also provide an infrastructure for rural communities to gain skills training through short courses as well as providing access to a post-secondary education. At the moment, most community colleges award qualifications up to Level 3 in the Malaysian Qualifications Framework (Certificate 3) in both the Skills sector (Sijil Kemahiran Malaysia or the Malaysian Skills Certificate) as well as the Vocational and Training sector but the number of community colleges that are starting to award Level 4 qualifications (Diploma) are increasing. This is two levels below a bachelor's degree (Level 6 in the MQF) and students within the system who intend to further their studies to that level will usually seek entry into Advanced Diploma programs in public universities, polytechnics or accredited private providers. Philippines In the Philippines, a community school functions as elementary or secondary school at daytime and towards the end of the day convert into a |
on the soothing and healing effect of water. It was inspired by a passage from King's "I Have a Dream" speech "...we will not be satisfied "until justice rolls down like waters and righteousness like a mighty stream..." The quotation in the passage, which is inscribed on the memorial, is a direct paraphrase of Amos 5:24, as translated in the American Standard Version of the Bible. The memorial is a fountain in the form of an asymmetric inverted stone cone. A film of water flows over the base of the cone, which contains the 41 names included. It is possible to touch the smooth film of water and to alter it temporarily, which quickly returns to smoothness. As such, the memorial represents the aspirations of the civil rights movement to end legal racial segregation. Tours and location The memorial is in downtown Montgomery, at 400 Washington Avenue, in an open plaza in front of the Civil Rights Memorial Center, which was the offices of the Southern Poverty Law Center until it moved across the street into a new building in 2001. The memorial may be visited freely 24 hours a day, 7 days a week. The Civil Rights Memorial Center offers guided group tours, lasting approximately one hour. Tours are available by appointment, Monday to Saturday. The memorial is only a few blocks from other historic sites, including the Dexter Avenue King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum. Names included "Civil Rights Martyrs" The 41 names included in the Civil Rights Memorial are those of: Louis Allen Willie Brewster Benjamin Brown Johnnie Mae Chappell James Chaney Addie Mae Collins Vernon Dahmer Jonathan Daniels Henry Hezekiah Dee Roman Ducksworth Jr. Willie Edwards Medgar Evers Andrew Goodman Paul Guihard Samuel Hammond Jr. Jimmie Lee Jackson Wharlest Jackson Martin Luther King Jr. Bruce W. Klunder George W. Lee Herbert Lee Viola Liuzzo Denise McNair Delano Herman Middleton Charles Eddie Moore Oneal Moore William Lewis Moore Mack Charles Parker Lemuel Penn James Reeb John Earl Reese Carole Robertson Michael Schwerner Henry Ezekial Smith Lamar Smith Emmett Till Clarence Triggs Virgil Lamar Ware Cynthia Wesley Ben Chester White Sammy Younge Jr. "The Forgotten" "The Forgotten" are 74 people who are identified in a display at the Civil | King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum. Names included "Civil Rights Martyrs" The 41 names included in the Civil Rights Memorial are those of: Louis Allen Willie Brewster Benjamin Brown Johnnie Mae Chappell James Chaney Addie Mae Collins Vernon Dahmer Jonathan Daniels Henry Hezekiah Dee Roman Ducksworth Jr. Willie Edwards Medgar Evers Andrew Goodman Paul Guihard Samuel Hammond Jr. Jimmie Lee Jackson Wharlest Jackson Martin Luther King Jr. Bruce W. Klunder George W. Lee Herbert Lee Viola Liuzzo Denise McNair Delano Herman Middleton Charles Eddie Moore Oneal Moore William Lewis Moore Mack Charles Parker Lemuel Penn James Reeb John Earl Reese Carole Robertson Michael Schwerner Henry Ezekial Smith Lamar Smith Emmett Till Clarence Triggs Virgil Lamar Ware Cynthia Wesley Ben Chester White Sammy Younge Jr. 
"The Forgotten" "The Forgotten" are 74 people who are identified in a display at the Civil Rights Memorial Center. These names were not inscribed on the Memorial because there was insufficient information about their deaths at the time the Memorial was created. However, it is thought that these people were killed as a result of racially motivated violence between 1952 and 1968. Andrew Lee Anderson Frank Andrews Isadore Banks Larry Bolden James Brazier Thomas Brewer Hilliard Brooks Charles Brown Jessie Brown Carrie Brumfield Eli Brumfield Silas (Ernest) Caston Clarence Cloninger Willie Countryman Vincent Dahmon Woodrow Wilson Daniels Joseph Hill Dumas Pheld Evans J. E. Evanston Mattie Greene Jasper Greenwood Jimmie Lee Griffith A. C. Hall Rogers Hamilton Collie Hampton Alphonso Harris Izell Henry Arthur James Hill Ernest Hunter Luther Jackson Ernest Jells Joe Franklin Jeter Marshall Johnson John Lee Willie Henry Lee Richard Lillard George Love Robert McNair Maybelle Mahone Sylvester Maxwell Clinton Melton James Andrew Miller Booker T. Mixon Nehemiah Montgomery Frank Morris James Earl Motley Sam O'Quinn Hubert Orsby Larry Payne C. H. Pickett Albert Pitts David Pitts Ernest McPharland Jimmy Powell William Roy Prather Johnny Queen Donald Rasberry Fred Robinson Johnny Robinson Willie |
the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy of difference and differential equations was notationally changing Δ to D, as a "finite" difference becomes "infinitesimal". These symbolic directions became popular, as operational calculus, and pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second "British Lagrangian School" with its treatment of Taylor series as formal. In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known to Woodhouse by 1803, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e. composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge. Academic From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to see university education more inclusive, universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture. It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order, however they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet). 
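The operator notation touched on above can be made explicit; the following is a minimal sketch in LaTeX of standard results, not formulas taken from the source, assuming a uniform tabulation step h. Taylor's theorem links the forward difference Δ to the derivative operator D, which is the "exponential map" relation attributed above to Arbogast's approach, and the second derivative of a composite function shows why the chain rule does not carry over simply to higher derivatives, the point generalised by Faà di Bruno's formula.

\[ \Delta f(x) = f(x+h) - f(x), \qquad (Df)(x) = \frac{df}{dx}(x) \]
\[ f(x+h) = \sum_{k \ge 0} \frac{h^k}{k!}\,(D^k f)(x) \quad\Longrightarrow\quad 1 + \Delta = e^{hD}, \qquad hD = \log(1 + \Delta) \]
\[ \frac{d^2}{dx^2}\, f\bigl(g(x)\bigr) = f''\bigl(g(x)\bigr)\, g'(x)^2 + f'\bigl(g(x)\bigr)\, g''(x) \]

The extra g''(x) term in the last line is the simplest instance of the complication noted for second and higher derivatives of composed functions.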
"Declinarians", learned societies and the BAAS Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a "campaigning element". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS). The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader; with the barb that both Babbage and Brewster had received public money. In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus. On the Economy of Machinery and Manufactures Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the "domestic and political economy" of manufactures. The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing. "Babbage principle" In Economy of Machinery was described what is now called the "Babbage principle". It pointed out commercial advantages available with more careful division of labour. As Babbage himself noted, it had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. 
Related formulations are the "principle of multiples" of Philip Sargant Florence, and the "balance of processes". What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs. Publishing Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel. Influence It has been written that "what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery". Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade. John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time. Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to "technology". William Jevons connected Babbage's "economy of labour" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management. Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan: Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted? 
Natural theology In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous "contrivance". The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that "there exists no fatal collision between the words of Scripture and the facts of nature;" on the other hand, he wrote that the Book of Genesis was not meant to be read literally in relation to scientific terms. Against those who said these were in conflict, he wrote "that the contradiction they have imagined can have no real existence, and that whilst the testimony of Moses remains unimpeached, we may also be permitted to confide in the testimony of our senses." The Ninth Bridgewater Treatise was quoted extensively in Vestiges of the Natural History of Creation. The parallel with Babbage's computing machines is made explicit, as allowing plausibility to the theory that transmutation of species could be pre-programmed. Jonar Ganeri, author of Indian Logic, believes Babbage may have been influenced by Indian thought; one possible route would be through Henry Thomas Colebrooke. Mary Everest Boole argues that Babbage was introduced to Indian thought in the 1820s by her uncle George Everest: Some time about 1825, [Everest] came to England for two or three years, and made a fast and lifelong friendship with Herschel and with Babbage, who was then quite young. I would ask any fair-minded mathematician to read Babbage's Ninth Bridgewater Treatise and compare it with the works of his contemporaries in England; and then ask himself whence came the peculiar conception of the nature of miracle which underlies Babbage's ideas of Singular Points on Curves (Chap. viii) – from European Theology or Hindu Metaphysic? Oh! how the English clergy of that day hated Babbage's book! Religious views Babbage was raised in the Protestant form of the Christian faith, his family having inculcated in him an orthodox form of worship. He explained: Rejecting the Athanasian Creed as a "direct contradiction in terms", in his youth he looked to Samuel Clarke's works on religion, of which Being and Attributes of God (1704) exerted a particularly strong influence on him. Later in life, Babbage concluded that "the true value of the Christian religion rested, not on speculative [theology] … but … upon those doctrines of kindness and benevolence which that religion claims and enforces, not merely in favour of man himself but of every creature susceptible of pain or of happiness." In his autobiography Passages from the Life of a Philosopher (1864), Babbage wrote a whole chapter on the topic of religion, where he identified three sources of divine knowledge: A priori or mystical experience From Revelation From the examination of the works of the Creator He stated, on the basis of the design argument, that studying the works of nature had been the more appealing evidence, and the one which led him to actively profess the existence of God.
Advocating for natural theology, he wrote: Like Samuel Vince, Babbage also wrote a defence of the belief in divine miracles. Against objections previously posed by David Hume, Babbage advocated for the belief of divine agency, stating "we must not measure the credibility or incredibility of an event by the narrow sphere of our own experience, nor forget that there is a Divine energy which overrides what we familiarly call the laws of nature." He alluded to the limits of human experience, expressing: "all that we see in a miracle is an effect which is new to our observation, and whose cause is concealed. The cause may be beyond the sphere of our observation, and would be thus beyond the familiar sphere of nature; but this does not make the event a violation of any law of nature. The limits of man's observation lie within very narrow boundaries, and it would be arrogance to suppose that the reach of man's power is to form the limits of the natural world." Later life The British Association was consciously modelled on the Deutsche Naturforscher-Versammlung, founded in 1822. It rejected romantic science as well as metaphysics, and started to entrench the divisions of science from literature, and professionals from amateurs. Belonging as he did to the "Wattite" faction in the BAAS, represented in particular by James Watt the younger, Babbage identified closely with industrialists. He wanted to go faster in the same directions, and had little time for the more gentlemanly component of its membership. Indeed, he subscribed to a version of conjectural history that placed industrial society as the culmination of human development (and shared this view with Herschel). A clash with Roderick Murchison led in 1838 to his withdrawal from further involvement. At the end of the same year he sent in his resignation as Lucasian professor, walking away also from the Cambridge struggle with Whewell. His interests became more focussed, on computation and metrology, and on international contacts. Metrology programme A project announced by Babbage was to tabulate all physical constants (referred to as "constants of nature", a phrase in itself a neologism), and then to compile an encyclopaedic work of numerical information. He was a pioneer in the field of "absolute measurement". His ideas followed on from those of Johann Christian Poggendorff, and were mentioned to Brewster in 1832. There were to be 19 categories of constants, and Ian Hacking sees these as reflecting in part Babbage's "eccentric enthusiasms". Babbage's paper On Tables of the Constants of Nature and Art was reprinted by the Smithsonian Institution in 1856, with an added note that the physical tables of Arnold Henry Guyot "will form a part of the important work proposed in this article". Exact measurement was also key to the development of machine tools. Here again Babbage is considered a pioneer, with Henry Maudslay, William Sellers, and Joseph Whitworth. Engineer and inventor Through the Royal Society Babbage acquired the friendship of the engineer Marc Brunel. It was through Brunel that Babbage knew of Joseph Clement, and so came to encounter the artisans whom he observed in his work on manufactures. Babbage provided an introduction for Isambard Kingdom Brunel in 1830, for a contact with the proposed Bristol & Birmingham Railway. He carried out studies, around 1838, to show the superiority of the broad gauge for railways, used by Brunel's Great Western Railway. 
In 1838, Babbage invented the pilot (also called a cow-catcher), the metal frame attached to the front of locomotives that clears the tracks of obstacles; he also constructed a dynamometer car. His eldest son, Benjamin Herschel Babbage, worked as an engineer for Brunel on the railways before emigrating to Australia in the 1850s. Babbage also invented an ophthalmoscope, which he gave to Thomas Wharton Jones for testing. Jones, however, ignored it. The device only came into use after being independently invented by Hermann von Helmholtz. Cryptography | Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours. He instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination. After Cambridge Considering his reputation, Babbage quickly made progress. He lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully, and had little in the way of career. In 1816 he was a candidate for a teaching job at Haileybury College; he had recommendations from James Ivory and John Playfair, but lost out to Henry Walter. In 1819, Babbage and Herschel visited Paris and the Society of Arcueil, meeting leading French mathematicians and physicists. That year Babbage applied to be professor at the University of Edinburgh, with the recommendation of Pierre Simon Laplace; the post went to William Wallace. With Herschel, Babbage worked on the electrodynamics of Arago's rotations, publishing in 1825. Their explanations were only transitional, being picked up and broadened by Michael Faraday. The phenomena are now part of the theory of eddy currents, and Babbage and Herschel missed some of the clues to unification of electromagnetic theory, staying close to Ampère's force law. Babbage purchased the actuarial tables of George Barrett, who died in 1821 leaving unpublished work, and surveyed the field in 1826 in Comparative View of the Various Institutions for the Assurance of Lives. This interest followed a project to set up an insurance company, prompted by Francis Baily and mooted in 1824, but not carried out. Babbage did calculate actuarial tables for that scheme, using Equitable Society mortality data from 1762 onwards. During this whole period, Babbage depended awkwardly on his father's support, given his father's attitude to his early marriage, of 1814: he and Edward Ryan wedded the Whitmore sisters. He made a home in Marylebone in London and established a large family. On his father's death in 1827, Babbage inherited a large estate (value around £100,000, equivalent to £ or $ today), making him independently wealthy. After his wife's death in the same year he spent time travelling. In Italy he met Leopold II, Grand Duke of Tuscany, foreshadowing a later visit to Piedmont. In April 1828 he was in Rome, and relying on Herschel to manage the difference engine project, when he heard that he had become a professor at Cambridge, a position he had three times failed to obtain (in 1820, 1823 and 1826). Royal Astronomical Society Babbage was instrumental in founding the Royal Astronomical Society in 1820, initially known as the Astronomical Society of London. Its original aims were to reduce astronomical calculations to a more standard form, and to circulate data. 
These directions were closely connected with Babbage's ideas on computation, and in 1824 he won its Gold Medal, cited "for his invention of an engine for calculating mathematical and astronomical tables". Babbage's motivation to overcome errors in tables by mechanisation had been a commonplace since Dionysius Lardner wrote about it in 1834 in the Edinburgh Review (under Babbage's guidance). The context of these developments is still debated. Babbage's own account of the origin of the difference engine begins with the Astronomical Society's wish to improve The Nautical Almanac. Babbage and Herschel were asked to oversee a trial project, to recalculate some part of those tables. With the results to hand, discrepancies were found. This was in 1821 or 1822, and was the occasion on which Babbage formulated his idea for mechanical computation. The issue of the Nautical Almanac is now described as a legacy of a polarisation in British science caused by attitudes to Sir Joseph Banks, who had died in 1820. Babbage studied the requirements to establish a modern postal system, with his friend Thomas Frederick Colby, concluding there should be a uniform rate that was put into effect with the introduction of the Uniform Fourpenny Post supplanted by the Uniform Penny Post in 1839 and 1840. Colby was another of the founding group of the Society. He was also in charge of the Survey of Ireland. Herschel and Babbage were present at a celebrated operation of that survey, the remeasuring of the Lough Foyle baseline. British Lagrangian School The Analytical Society had initially been no more than an undergraduate provocation. During this period it had some more substantial achievements. In 1816 Babbage, Herschel and Peacock published a translation from French of the lectures of Sylvestre Lacroix, which was then the state-of-the-art calculus textbook. Reference to Lagrange in calculus terms marks out the application of what are now called formal power series. British mathematicians had used them from about 1730 to 1760. As re-introduced, they were not simply applied as notations in differential calculus. They opened up the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy of difference and differential equations was notationally changing Δ to D, as a "finite" difference becomes "infinitesimal". These symbolic directions became popular, as operational calculus, and pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second "British Lagrangian School" with its treatment of Taylor series as formal. In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known to Woodhouse by 1803, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e. 
composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge. Academic From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to see university education more inclusive, universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture. It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order, however they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet). "Declinarians", learned societies and the BAAS Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a "campaigning element". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS). The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader; with the barb that both Babbage and Brewster had received public money. 
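The tables-by-machine idea recalled above, the "engine for calculating mathematical and astronomical tables" and the difference equations behind it, rests on the method of finite differences: the n-th differences of a degree-n polynomial are constant, so once a short run of exact values seeds the table, every further entry follows by additions alone. A minimal sketch in Python, with an illustrative polynomial of my own choosing rather than anything from the source:

# Method of finite differences: extend a polynomial table using only additions,
# the scheme the difference engine mechanises. Polynomial chosen for illustration.
def difference_table(seed, steps):
    # Build successive difference columns from the exact seed values.
    diffs = [list(seed)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    col = [d[0] for d in diffs]              # [f(0), Δf(0), Δ²f(0), ...]
    table = [col[0]]
    for _ in range(steps):
        for i in range(len(col) - 1):        # each entry absorbs the next-order difference
            col[i] += col[i + 1]
        table.append(col[0])
    return table

f = lambda x: 2 * x * x + 3 * x + 1          # hypothetical example polynomial
print(difference_table([f(0), f(1), f(2)], 5))   # -> [1, 6, 15, 28, 45, 66]

Seeding with three exact values suffices here because the example polynomial has degree two; the printed values match f(0) through f(5) without any multiplication after the seed is built.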
In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus. On the Economy of Machinery and Manufactures Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the "domestic and political economy" of manufactures. The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing. "Babbage principle" In Economy of Machinery was described what is now called the "Babbage principle". It pointed out commercial advantages available with more careful division of labour. As Babbage himself noted, it had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. Related formulations are the "principle of multiples" of Philip Sargant Florence, and the "balance of processes". What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs. Publishing Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel. Influence It has been written that "what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery". 
Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade. John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time. Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to "technology". William Jevons connected Babbage's "economy of labour" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management. Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan: Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted? Natural theology In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous "contrivance". The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that "there exists no fatal |
largely socially constructed. For example, in Western society, trousers have long been adopted for usage by women, and it is no longer regarded as cross-dressing. In cultures where men have traditionally worn skirt-like garments such as the kilt or sarong, these are not seen as women's clothing, and wearing them is not seen as cross-dressing for men. As societies are becoming more global in nature, both men's and women's clothing are adopting styles of dress associated with other cultures. Cosplaying may also involve cross-dressing, for some females may wish to dress as a male, and vice versa (see Crossplay (cosplay)). Breast binding (for females) is not uncommon and is one of the things likely needed to cosplay a male character. In most parts of the world it remains socially disapproved for men to wear clothes traditionally associated with women. Attempts are occasionally made, e.g. by fashion designers, to promote the acceptance of skirts as everyday wear for men. Cross-dressers have complained that society permits women to wear pants or jeans and other masculine clothing, while condemning any man who wants to wear clothing sold for women. While creating a more feminine figure, male cross-dressers will often utilize different types and styles of breast forms, which are silicone prostheses traditionally used by women who have undergone mastectomies to recreate the visual appearance of a breast. While most male cross-dressers utilize clothing associated with modern women, some are involved in subcultures that involve dressing as little girls or in vintage clothing. Some such men have written that they enjoy dressing as femininely as possible, so they wear frilly dresses with lace and ribbons, bridal gowns complete with veils, as well as multiple petticoats, corsets, girdles and/or garter belts with nylon stockings. The term underdressing is used by male cross-dressers to describe wearing female undergarments such as panties under their male clothes. The famous low-budget film-maker Edward D. Wood, Jr. said he often wore women's underwear under his military uniform as a Marine during World War II. Female masking is a form of cross-dressing in which men wear masks that present them as female. Social issues Cross-dressers may begin wearing clothing associated with the opposite sex in childhood, using the clothes of a sibling, parent, or friend. Some parents have said they allowed their children to cross-dress and, in many cases, the child stopped when they became older. The same pattern often continues into adulthood, where there may be confrontations with a spouse, partner, family member or friend. Married cross-dressers can experience considerable anxiety and guilt if their spouse objects to their behavior. Sometimes because of guilt or other reasons cross-dressers dispose of all their clothing, a practice called "purging", only to start collecting other gender's clothing again. Festivals Celebrations of cross-dressing occur in widespread cultures. The Abissa festival in Côte d'Ivoire, Ofudamaki in Japan, and Kottankulangara Festival in India are all examples of this. Analysis Advocacy for social change has done much to relax the constrictions of gender roles on men and women, but they are still subject to prejudice from some people. It is noticeable that as being transgender becomes more socially accepted as a normal human condition, the prejudices against cross-dressing are changing quite quickly, just as the similar prejudices against homosexuals have changed rapidly in recent decades. 
The reason it is so hard to have statistics for female-assigned cross-dressers is that the line where conventional dress ends and cross-dressing begins has become blurred, whereas the same line for men is as well defined as ever. This is one of the many issues being addressed by third wave feminism as well as the modern-day masculist movement. The general culture has very mixed views about cross-dressing. A woman who wears her husband's shirt to bed is considered attractive, while a man who wears his wife's nightgown to bed may be considered transgressive. Marlene Dietrich in a tuxedo was considered very erotic; Jack Lemmon in a dress was considered ridiculous. All this may result from an overall gender role rigidity for males; that is, because of the prevalent gender dynamic throughout the world, men frequently encounter discrimination when deviating from masculine gender norms, particularly violations of heteronormativity. A man's adoption of feminine clothing is often considered a step down in the gendered social order, whereas a woman's adoption of what is traditionally men's clothing (at least in the English-speaking world) has less of an impact because | as 'drag kings'. The modern activity of battle reenactments has raised the question of women passing as male soldiers. In 1989, Lauren Burgess dressed as a male soldier in a U.S. National Park Service reenactment of the Battle of Antietam, and was ejected after she was discovered to be a woman. Burgess sued the Park Service for sexual discrimination. The case spurred spirited debate among Civil War buffs. In 1993, a federal judge ruled in Burgess's favor. "Wigging" refers to the practice of male stunt doubles taking the place of an actress, parallel to "paint downs", where white stunt doubles are made up to resemble black actors. Female stunt doubles have begun to protest this norm of "historical sexism", saying that it restricts their already limited job possibilities. Sexual fetishes A transvestic fetishist is a person who cross-dresses as part of a sexual fetish. According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, this fetishism was limited to heterosexual men; however, DSM-5 does not have this restriction, and opens it to women and men, regardless of their sexual orientation. Sometimes either member of a heterosexual couple will cross-dress in order to arouse the other. For example, the male might wear skirts or lingerie and/or the female will wear boxers or other male clothing. (See also forced feminization) Passing Some people who cross-dress may endeavor to project a complete impression of belonging to another gender, including mannerisms, speech patterns, and emulation of sexual characteristics. This is referred to as passing or "trying to pass," depending on how successful the person is. An observer who sees through the cross-dresser's attempt to pass is said to have "read" or "clocked" them. There are videos, books, and magazines on how a man may look more like a woman. Others may choose to take a mixed approach, adopting some feminine traits and some masculine traits in their appearance. For instance, a man might wear both a dress and a beard. This is sometimes known as "genderfuck". In a broader context, cross-dressing may also refer to other actions undertaken to pass as a particular sex, such as packing (accentuating the male crotch bulge) or, the opposite, tucking (concealing the male crotch bulge). Clothes The actual determination of cross-dressing is largely socially constructed.
For example, in Western society, trousers have long been adopted for usage by women, and it is no longer regarded as cross-dressing. In cultures where men have traditionally worn skirt-like garments such as the kilt or sarong, these are not seen as women's clothing, and wearing them is not seen as cross-dressing for men. As societies are becoming more global in nature, both men's and women's clothing are adopting styles of dress associated with other cultures. Cosplaying may also involve cross-dressing, for some females may wish to dress as a male, and vice versa (see Crossplay (cosplay)). Breast binding (for females) is not uncommon and is one of the things likely needed to cosplay a male character. In most parts of the world it remains socially disapproved for men to wear clothes traditionally associated with women. Attempts are occasionally made, e.g. by fashion designers, to promote the acceptance of skirts as everyday wear for men. Cross-dressers have complained that society permits women to wear pants or jeans and other masculine clothing, while condemning any man who wants to wear clothing sold for women. While creating a more feminine figure, male cross-dressers will often utilize different types and styles of breast forms, which are silicone prostheses traditionally used by women who have undergone mastectomies to recreate the visual appearance of a breast. While most male cross-dressers utilize clothing associated with modern women, some are involved in subcultures that involve dressing as little girls or in vintage clothing. Some such men have written that they enjoy dressing as femininely as possible, so they wear frilly dresses with lace and ribbons, bridal gowns complete with veils, as well as multiple petticoats, corsets, girdles and/or garter belts with nylon stockings. The term underdressing is used by male cross-dressers to describe wearing female undergarments such as panties under their male clothes. The famous low-budget film-maker Edward D. Wood, Jr. said he often wore women's underwear under his military uniform as a Marine during World War II. Female masking is a form of cross-dressing in which men wear masks that present them as female. Social issues Cross-dressers may begin wearing clothing associated with the opposite sex in childhood, using the clothes of a sibling, parent, or friend. Some parents have said they allowed their children to cross-dress and, in many cases, the child stopped when they became older. The same pattern often continues into adulthood, where there may be confrontations with a spouse, partner, family member or friend. Married cross-dressers can experience considerable anxiety and guilt if their spouse objects to their behavior. Sometimes because of guilt or other reasons cross-dressers dispose of all their clothing, a practice called "purging", only to start collecting other gender's clothing again. Festivals Celebrations of cross-dressing occur in widespread cultures. The Abissa festival in Côte d'Ivoire, Ofudamaki in Japan, and Kottankulangara Festival in India are all examples of this. Analysis Advocacy for social change has done much to relax the constrictions of gender roles on men |
of the boots add a degree of isolation of horizontal wheel-rail vibrations, and are insulators of the track signal circuit in the humid tunnel environment. UIC60 (60 kg/m) rails of 900A grade rest on rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Mr. Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site. Maintenance activities are less than projected. Initially the rails were ground on a yearly basis or after approximately 100MGT of traffic. Ride quality continues to be noticeably smooth and of low noise. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments thereby created, and thus providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built; 150 m long, 10 m high and 18 m wide. The English crossover is from Shakespeare Cliff, and the French crossover is from Sangatte. Ventilation, cooling and drainage The ventilation system maintains the air pressure in the service tunnel higher than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on. During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to . As well as making the trains "unbearably warm" for passengers this also presented a risk of equipment failure and track distortion. To cool the tunnel to below , engineers installed of diameter cooling pipes carrying of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a Hydrochlorofluorocarbon (HCFC) refrigerant gas. Due to R22's ozone depletion potential (ODP) and high global warming potential (GWP), its use is being phased out in developed countries, and since 1 January 2015 it has been illegal in Europe to use HCFCs to service air-conditioning equipment—broken equipment that used HCFCs must instead be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2600 kW to 14,000 kW) chillers were installed—two located in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at , and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink. Rolling stock Rolling stock used previously Operators Eurotunnel Shuttle Initially 38 Le Shuttle locomotives were commissioned, with one at each end of a shuttle train. Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets. 
Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets. Freight locomotives Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel. International passenger Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and have been operating through the Channel Tunnel ever since alongside the current Class 373. Germany (DB) has since around 2005 tried to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately dropped. In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used. Service locomotives Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031. Operation The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994, in millions: Usage and services Transport services offered by the tunnel are as follows: Eurotunnel Le Shuttle roll-on roll-off shuttle service for road vehicles and their drivers and passengers, Eurostar passenger trains, through freight trains. Both the freight and passenger traffic forecasts that led to the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated. With the EU's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. There have been a number of operators interested in running trains through the tunnel and along High Speed 1 to London. In June 2013, after several years, DB obtained a licence to operate Frankfurt – London trains, not expected to run before 2016 because of delivery delays of the custom-made trains. Plans for the service to Frankfurt seem to have been shelved in 2018. Passenger traffic volumes Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, dropped to 14.9 million in 2003 and has increased substantially since then. 
At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains in the opening year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then dropping to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase. Freight traffic volumes Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings increased over the period, indicating the substitutability of the tunnel by sea crossings. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions but Eurotunnel's 1990 and 1994 predictions were overestimates. For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3M tonnes. Through freight volumes peaked in 1998 at 3.1M tonnes. This fell back to 1.21M tonnes in 2007, increasing slightly to 1.24M tonnes in 2008. Together with that carried on freight shuttles, freight growth has occurred since opening, with 6.4M tonnes carried in 1995, 18.4M tonnes recorded in 2003 and 19.6M tonnes in 2007. Numbers fell back in the wake of the 2008 fire. Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to cessation of UK-French government subsidies of £52 million per annum to cover the tunnel "Minimum User Charge" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November. Economic performance Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 the price had risen to £11.00. Delays and cost overruns led to the price dropping; during demonstration runs in October 1994 it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. Financial restructuring of Eurotunnel occurred in mid-1998, reducing debt and financial charges. Despite the restructuring, The Economist reported in 1998 that to break even Eurotunnel would have to increase fares, traffic and market share for sustainability. A cost benefit analysis of the tunnel indicated that there were few impacts on the wider economy and few developments associated with the project, and that the British economy would have been better off if it had not been constructed. Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones wherein the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. 
For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities. In 1999 Eurostar posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4 percent from 2012, to £54 million. Security There is a need for full passport controls, since this is the border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding first by officials belonging to departing country and then officials of the destination country. These are placed only at the main Eurostar stations: French officials operate at London St Pancras, Ebbsfleet International and Ashford International, while British officials operate at Calais-Fréthun, Lille-Europe, Marne-la-Vallée–Chessy, Brussels-South and Paris-Gare du Nord. There are security checks before boarding as well. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains. For Eurostar trains travelling from places south of Paris, there is no passport and security check before departure, and those trains must stop in Lille at least 30 minutes to allow all passengers to be checked. No checks are done on board. There have been plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason to cancel them was the need for a stop in Lille. A direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and intergovernmental agreement, a direct service from the two Dutch cities to London will start on 30 April 2020. Terminals The terminals' sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system to allow travellers to go onto the motorway at the destination country immediately after leaving the shuttle. To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are of main-line track, 45 turnouts and eight platforms. At Calais there are of track and 44 turnouts. At the terminals the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard. Regional impact A 1996 report from the European Commission predicted that Kent and Nord-Pas de Calais had to face increased traffic volumes due to general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. Kent's regional development would benefit from the tunnel, but being so close to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be totally dependent on London's expansion. Nord-Pas-de-Calais enjoys a strong internal symbolic effect of the Tunnel which results in significant gains in manufacturing. The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions. 
The image of a region being connected to the European high-speed network and an active political response are more important for regional economic development. Some small and medium-sized enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand the profile of their business with positive effect, such as The New Inn at Etchinghill, which was able to commercially exploit its unique selling point as being 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared to general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be equally distributed throughout the region. The overall environmental impact is almost certainly negative. Since the opening of the tunnel, small positive impacts on the wider economy have been felt, but it is difficult to identify major economic successes directly attributed to the tunnel. The Eurotunnel does operate profitably, offering an alternative transportation mode unaffected by poor weather. High construction costs did delay profitability, however, and early in its operation the companies involved in the tunnel's construction and operation relied on government aid to deal with the debts that had accumulated. Illegal immigration Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and by 1999, the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented. Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England. Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, removing border checks in normal times between most EU member states; Britain and the Republic of Ireland form their own separate Common Travel Area immigration zone. Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train. Trucks are loaded onto freight trains. In a few instances, migrants stowed away in a liquid chocolate tanker and managed to survive, spread across several attempts. Although the facilities were fenced, airtight security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem. In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (up to 550 in a December 2001 incident) stormed the fences and attempted to enter en masse. Other migrants seeking permanent UK settlement use the Eurostar passenger train.
They may purport to be visitors (whether to be issued with a required visit visa or deny and falsify their true intentions to obtain a maximum of 6-months-in-a-year at-port stamp); purport to be someone else whose documents they hold or used forged or counterfeit passports. Such breaches will result in refusal of permission to enter the UK, effected by Border Force after such a person's identity is fully established assuming they persist in their application to enter the UK. Diplomatic efforts Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against the centre. As at 2006 the United Kingdom blamed France for allowing Sangatte to open, and France blamed both the UK for its then lax asylum rules/law, and the EU for not having a uniform immigration policy. The cause célèbre nature of the problem even included journalists detained as they followed migrants onto railway property. In 2002, after the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security, a double fence was built at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants. On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged the sections of track by burning car tires, leading to all trains being cancelled and a backlog of vehicles. Hundreds seeking to reach Britain made use of the situation to attempt to stow away inside and underneath transport trucks destined for the United Kingdom. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais. Illegal attempts to cross and deaths In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day. On 6 July 2015, a migrant died while attempting to climb onto a freight train while trying to reach Britain from the French side of the Channel. The previous month an Eritrean man was killed under similar circumstances. During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. On 4 August 2015, a Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about through the tunnel. Mechanical incidents Fires There have been three fires in the tunnel, all on the heavy goods vehicle (HGV) shuttles, that were significant enough to close the tunnel, as well as other more minor incidents. On 9 December 1994, during an "invitation only" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal and was put out about 40 minutes later with no passenger injuries. 
On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was seriously hurt. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been due to arson of a heavy goods vehicle. It is estimated that the heart of the fire reached , with the tunnel severely damaged over , with some affected to some extent. Full operation recommenced six months after the fire. On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire. On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million. On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire. On 17 January 2015, both tunnels were closed following a lorry fire which filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. Thirty-eight passengers and four members of Eurotunnel staff were evacuated into the service tunnel, and then transported to France using special STTS road vehicles in the Service Tunnel. The passengers and crew were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal. Train failures On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards. On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle. On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park. The occasion was the first time that a Eurostar train was evacuated inside the tunnel; the failing of four at once was described as "unprecedented". The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, had called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations. 
On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had | best protected against terrorism, and was the most likely to attract sufficient private finance. Arrangement The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement. The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies, and outlined arbitration methods to be used in the event of disputes. It set up the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC. It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind. Design and construction was done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte was done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff was done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks. In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded. Cost The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now in the hands of the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. 
A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity. Private funding for such a complex infrastructure project was of unprecedented scale. An initial equity of £45 million was raised by CTG/F-M, increased by £206 million private institutional placement, £770 million was raised in a public share offer that included press and television advertisements, a syndicated bank loan and letter of credit arranged £5 billion. Privately financed, the total investment costs at 1985 prices were £2.6 billion. At the 1994 completion actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast. Construction Working from both the English and French sides of the Channel, eleven tunnel boring machines or TBMs cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively. Tunnelling commenced in 1988, and the tunnel began operating in 1994. In 1985 prices, the total construction cost was £4.65 billion (equivalent to £ billion in 2015), an 80% cost overrun. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring. Completion A 50 mm (2 in) diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Phillippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC TV television commentator called Graham Fagg "the first man to cross the Channel by land for 8000 years".) The two tunnelling efforts met each other with an offset of only 36.2 cm. The tunnel was officially opened, one year later than originally planned, by Queen Elizabeth II and the French president, François Mitterrand, in a ceremony held in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. Following the ceremony President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy. The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007 the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to , the journey from London to Paris taking 2 hours 15 minutes, to Brussels 1 hour 51 minutes. In 1994, the American Society of Civil Engineers elected the tunnel as one of the seven modern Wonders of the World. In 1995, the American magazine Popular Mechanics published the results. 
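The cost and financing figures quoted above admit a quick consistency check. The sketch below recomputes the overrun implied by the £2.6 billion estimate and the £4.65 billion outturn (both in 1985 prices) and totals the equity tranches mentioned; no figures beyond those already stated are assumed.

```python
# Rough consistency check of the Channel Tunnel cost figures quoted above.
# All inputs come from the surrounding text (1985 prices); nothing new is assumed.

forecast_cost = 2.6   # GBP billion, estimated at financing
actual_cost = 4.65    # GBP billion, at the 1994 completion

overrun = (actual_cost - forecast_cost) / forecast_cost
print(f"Cost overrun: {overrun:.0%}")  # prints 79%, consistent with the quoted "80% cost overrun"

# Equity raised in stages (GBP million), alongside the roughly 5 billion pound
# syndicated bank loan and letter of credit mentioned in the text.
equity_tranches = [45, 206, 770]
print(f"Total equity: {sum(equity_tranches)} million pounds")  # prints 1021 million pounds
```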
Opening dates Opening was phased for various services offered as the Channel Tunnel Safety Authority, the IGC, gave permission for various services to begin at several dates over the period 1994/1995 but start up dates were a few days later. Engineering Surveying undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of has variable and difficult geology. The tunnel consists of three bores: two diameter rail tunnels, apart, in length with a diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts. The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff, French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), the English side six. The service tunnel uses Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue. Between the portals at Beussingue and Castle Hill the tunnel is long, with under land on the French side and on the UK side, and under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but with the longest under-sea section. The average depth is below the seabed. On the UK side, of the expected of spoil approximately was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming of land. This land was then made into the Samphire Hoe Country Park. Environmental impact assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised over a high-speed link to London. Geology Successful tunnelling required a sound understanding of the topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome. Characteristics include: Continuous chalk on the cliffs on either side of the Channel containing no major faulting, as observed by Verstegan in 1605. Four geological strata, marine sediments laid down 90–100 million years ago; pervious upper and middle chalk above slightly pervious lower chalk and finally impermeable Gault Clay. A sandy stratum, glauconitic marl (tortia), is in between the chalk marl and gault clay. A layer of chalk marl (French: craie bleue) in the lower third of the lower chalk appeared to present the best tunnelling medium. The chalk has a clay content of 30–40% providing impermeability to groundwater yet relatively easy excavation with strength allowing minimal support. Ideally the tunnel would be bored in the bottom of the chalk marl, allowing water inflow from fractures and joints to be minimised, but above the gault clay that would increase stress on the tunnel lining and swell and soften when wet. On the English side, the stratum dip is less than 5°; on the French side this increases to 20°. Jointing and faulting are present on both sides. 
On the English side, only minor faults of displacement less than exist; on the French side, displacements of up to are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remoulded clay. The increased dip and faulting restricted the selection of route on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides. The Quaternary undersea valley Fosse Dangaered, and Castle Hill landslip at the English portal, caused concerns. Identified by the 1964–65 geophysical survey, the Fosse Dangaered is an infilled valley system extending below the seabed, south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, and so the tunnel route was made as far north and deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipping blocks of lower chalk, glauconitic marl and gault debris. Thus the area was stabilised by buttressing and inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing took place in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing. Surveying Marine soundings and samplings by Thomé de Gamond were carried out during 1833–67, establishing the seabed depth at a maximum of and the continuity of geological strata (layers). Surveying continued over many years, with 166 marine and 70 land-deep boreholes being drilled and over 4,000-line-kilometres of marine geophysical survey completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988. The surveying in 1958–59 catered for immersed tube and bridge designs as well as a bored tunnel, and thus a wide area was investigated. At this time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–65 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour. Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–73 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed of diameter tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as the 1975 attempt. In the 1986–87 survey, previous findings were reinforced, and the characteristics of the gault clay and the tunnelling medium (chalk marl that made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed. Tunnelling Tunnelling was a major engineering challenge, with the only precedent being the undersea Seikan Tunnel in Japan, which opened in 1988. 
A serious health and safety risk with building tunnels underwater is major water inflow due to the high hydrostatic pressure from the sea above, under weak ground conditions. The tunnel also had the challenge of time: being privately funded, early financial return was paramount. The objective was to construct two rail tunnels, apart, in length; a service tunnel between the two main ones; pairs of cross-passages linking the rail tunnels to the service one at spacing; piston relief ducts in diameter connecting the rail tunnels apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay. Precast segmental linings in the main TBM drives were used, but two different solutions were used. On the French side, neoprene and grout sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was for speed so bolting of cast-iron lining segments was only carried out in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; in the French side, five segments plus a key. On the French side, a diameter deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was below the top of Shakespeare Cliff, the New Austrian Tunnelling method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff—same place as the marine tunnels—not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition of placing the chalk in an enclosed lagoon, to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary, which used Scottish granite aggregate delivered by ship from the Foster Yeoman coastal super quarry at Glensanda in Loch Linnhe on the west coast of Scotland. On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes were used. The TBMs were of a closed nature during the initial , but then operated as open, boring through the chalk marl stratum. This minimised the impact to the ground, allowed high water pressures to be withstood and it also alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one main land machine (the short land drives of allowed one TBM to complete the first drive then reverse direction and complete the other), and two service tunnel machines. On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel. These buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. 
A gauge railway was used on the English side during construction. In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine. At the end of the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 "Virginie") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words "hommage aux bâtisseurs du tunnel", meaning "tribute to the builders of the tunnel". Tunnel boring machines The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland. Railway design Loading gauge The loading gauge height is . Communications There are three communication systems: concession radio (CR) for mobile vehicles and personnel within Eurotunnel's Concession (terminals, tunnels, coastal shafts); track-to-train radio (TTR) for secure speech and data between trains and the railway control centre; Shuttle internal radio (SIR) for communication between shuttle crew and to passengers over car radios. Power supply Power is delivered to the locomotives via an overhead line (catenary) at . with a normal overhead clearance of . All tunnel services run on electricity, shared equally from English and French sources. There are two sub-stations fed at 400 kV at each terminal, but in an emergency the tunnel's lighting (about 20,000 light fittings) and plant can be powered solely from either England or France. The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use the third rail system. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz. The railways on "classic" lines in Belgium are also electrified by overhead wires, but at 3000 V DC. Signalling A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is . Signalling in the tunnel is coordinated from two control centres: The main control centre at the Folkestone terminal, and a backup at the Calais terminal, which is staffed at all times and can take over all operations in the event of a breakdown or emergency. Track system Conventional ballasted tunnel-track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen based on reliability and cost-effectiveness based on good performance in Swiss tunnels and worldwide. The type of track used is known as Low Vibration Track (LVT). Like ballasted track the LVT is of the free floating type, held in place by gravity and friction. 
Reinforced concrete blocks of 100 kg support the rails every 60 cm and are held by 12 mm thick closed cell polymer foam pads placed at the bottom of rubber boots. The latter separate the blocks' mass movements from the lean encasement concrete. Ballastless track provides extra overhead clearance necessary for the passage of larger trains. The corrugated rubber walls of the boots add a degree of isolation of horizontal wheel-rail vibrations, and are insulators of the track signal circuit in the humid tunnel environment. UIC60 (60 kg/m) rails of 900A grade rest on rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Mr. Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site. Maintenance activities are less than projected. Initially the rails were ground on a yearly basis or after approximately 100MGT of traffic. Ride quality continues to be noticeably smooth and of low noise. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments thereby created, and thus providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built; 150 m long, 10 m high and 18 m wide. The English crossover is from Shakespeare Cliff, and the French crossover is from Sangatte. Ventilation, cooling and drainage The ventilation system maintains the air pressure in the service tunnel higher than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on. During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to . As well as making the trains "unbearably warm" for passengers this also presented a risk of equipment failure and track distortion. To cool the tunnel to below , engineers installed of diameter cooling pipes carrying of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a Hydrochlorofluorocarbon (HCFC) refrigerant gas. Due to R22's ozone depletion potential (ODP) and high global warming potential (GWP), its use is being phased out in developed countries, and since 1 January 2015 it has been illegal in Europe to use HCFCs to service air-conditioning equipment—broken equipment that used HCFCs must instead be replaced with equipment that does not use it. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2600 kW to 14,000 kW) chillers were installed—two located in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at , and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink. 
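Two of the figures quoted above can be sanity-checked with simple arithmetic, done in the short Python sketch below. The stated energy saving (4.8 GWh, described as roughly 33%) implies a pre-upgrade cooling consumption of around 14 to 15 GWh a year, and the quoted €500,000 works out to roughly €0.10 per kWh saved. The block-count estimate uses an assumed running-tunnel length of about 50 km, which is not stated in the text, so treat that length as an assumption.

# Back-of-envelope checks on figures quoted in the text.

# 1) Cooling-system savings: 4.8 GWh stated as ~33% of consumption.
saving_gwh = 4.8
saving_fraction = 0.33
previous_annual_gwh = saving_gwh / saving_fraction
print(f"Implied pre-upgrade cooling consumption: ~{previous_annual_gwh:.1f} GWh/year")  # ~14.5

# Implied value of the saved energy (1 GWh = 1,000,000 kWh).
saving_eur = 500_000
eur_per_kwh = saving_eur / (saving_gwh * 1e6)
print(f"Implied energy price: ~{eur_per_kwh:.3f} EUR/kWh")  # ~0.104

# 2) LVT block count: one block under each rail every 0.6 m, two rails per
#    track, two running tunnels. The ~50 km length per bore is an assumed
#    round figure, not taken from the text.
tunnel_length_m = 50_000
block_spacing_m = 0.6
rails_per_track = 2
running_tunnels = 2
blocks = tunnel_length_m / block_spacing_m * rails_per_track * running_tunnels
print(f"Estimated blocks: ~{blocks:,.0f}")  # ~333,000, close to the 334,000 quoted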
Rolling stock Rolling stock used previously Operators Eurotunnel Shuttle Initially 38 Le Shuttle locomotives were commissioned, with one at each end of a shuttle train. Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets. Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets. Freight locomotives Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel. International passenger Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and have been operating through the Channel Tunnel ever since alongside the current Class 373. Germany (DB) has since around 2005 tried to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately dropped. In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used. Service locomotives Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031. Operation The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994, in millions: Usage and services Transport services offered by the tunnel are as follows: Eurotunnel Le Shuttle roll-on |
The Glass Hammer (1985) and Death Arms (1987). Jeter wrote other standalone cyberpunk novels before going on to write three authorized sequels to Do Androids Dream of electric sheep, named Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996), and Blade Runner 4: Eye and Talon. Do Androids Dream of Electric Sheep was made into the seminal movie Blade Runner, released in 1982. This was one year after William Gibson's story, "Johnny Mnemonic" helped move proto-cyberpunk concepts into the mainstream. That story, which also became a film years later in 1995, involves another dystopian future, where human couriers deliver computer data, stored cybernetically in their own minds. The term cyberpunk first appeared as the title of a short story written by Bruce Bethke, written in 1980 and published in Amazing Stories in 1983. It was picked up by Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine and popularized in his editorials. Bethke says he made two lists of words, one for technology, one for troublemakers, and experimented with combining them variously into compound words, consciously attempting to coin a term that encompassed both punk attitudes and high technology. He described the idea thus: Afterward, Dozois began using this term in his own writing, most notably in a Washington Post article where he said "About the closest thing here to a self-willed esthetic 'school' would be the purveyors of bizarre hard-edged, high-tech stuff, who have on occasion been referred to as 'cyberpunks' — Sterling, Gibson, Shiner, Cadigan, Bear." About that time in 1984, William Gibson's novel Neuromancer was published, delivering a glimpse of a future encompassed by what became an archetype of cyberpunk "virtual reality", with the human mind being fed light-based worldscapes through a computer interface. Some, perhaps ironically including Bethke himself, argued at the time that the writers whose style Gibson's books epitomized should be called "Neuromantics", a pun on the name of the novel plus "New Romantics", a term used for a New Wave pop music movement that had just occurred in Britain, but this term did not catch on. Bethke later paraphrased Michael Swanwick's argument for the term: "the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly imitating Neuromancer". Sterling was another writer who played a central role, often consciously, in the cyberpunk genre, variously seen as either keeping it on track, or distorting its natural path into a stagnant formula. In 1986 he edited a volume of cyberpunk stories called Mirrorshades: The Cyberpunk Anthology, an attempt to establish what cyberpunk was, from Sterling's perspective. In the subsequent decade, the motifs of Gibson's Neuromancer became formulaic, climaxing in the satirical extremes of Neal Stephenson's Snow Crash in 1992. Bookending the cyberpunk era, Bethke himself published a novel in 1995 called Headcrash, like Snow Crash a satirical attack on the genre's excesses. Fittingly, it won an honor named after cyberpunk's spiritual founder, the Philip K. Dick Award. It satirized the genre in this way: The impact of cyberpunk, though, has been long-lasting. Elements of both the setting and storytelling have become normal in science fiction in general, and a slew of sub-genres now have -punk tacked onto their names, most obviously steampunk, but also a host of other cyberpunk derivatives. 
Style and ethos Primary figures in the cyberpunk movement include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, and John Shirley. Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted) is also seen by some as prefiguring the movement. Blade Runner can be seen as a quintessential example of the cyberpunk style and theme. Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. Cyberpunk is also featured prominently in anime and manga (Japanese cyberpunk), with Akira, Ghost in the Shell and Cowboy Bebop being among the most notable. Setting Cyberpunk writers tend to use elements from crime fiction—particularly hardboiled detective fiction and film noir—and postmodernist prose to describe an often nihilistic underground side of an electronic society. The genre's vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk's antipathy towards utopian SF in his 1981 short story "The Gernsback Continuum," which pokes fun at and, to a certain extent, condemns utopian science fiction. In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the line between actual and virtual reality. A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers and internet connectivity. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power. The economic and technological state of Japan is a regular theme in the cyberpunk literature of the 1980s. Of Japan's influence on the genre, William Gibson said, "Modern Japan simply was cyberpunk." Cyberpunk is often set in urbanized, artificial landscapes, and "city lights, receding" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality. The cityscapes of Hong Kong has had major influences in the urban backgrounds, ambiance and settings in many cyberpunk works such as Blade Runner and Shadowrun. Ridley Scott envisioned the landscape of cyberpunk Los Angeles in Blade Runner to be "Hong Kong on a very bad day". The streetscapes of the Ghost in the Shell film were based on Hong Kong. Its director Mamoru Oshii felt that Hong Kong's strange and chaotic streets where "old and new exist in confusing relationships", fit the theme of the film well. Hong Kong's Kowloon Walled City is particularly notable for its disorganized hyper-urbanization and breakdown in traditional urban planning to be an inspiration to cyberpunk landscapes. Portrayals of East Asia and Asians in Western cyberpunk have been criticized as Orientalist and promoting racist tropes playing on American and European fears of East Asian dominance; this has been referred to as "techno-Orientalism". Protagonists One of the cyberpunk genre's prototype characters is Case, from Gibson's Neuromancer. Case is a "console cowboy," a brilliant hacker who has betrayed his organized criminal partners. 
Robbed of his talent through a crippling injury inflicted by the vengeful partners, Case unexpectedly receives a once-in-a-lifetime opportunity to be healed by expert medical care but only if he participates in another criminal enterprise with a new crew. Like Case, many cyberpunk protagonists are manipulated, placed in situations where they have little or no choice, and although they might see things through, they do not necessarily come out any further ahead than they previously were. This emphasis on the misfits and the malcontents is the "punk" component of cyberpunk. Society and government Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of cultural revolution in science fiction. In the words of author and critic David Brin: ...a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic ...Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite. Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form. Some observers cite that cyberpunk tends to marginalize sectors of society such as women and Africans. It is claimed that, for instance, cyberpunk depicts fantasies that ultimately empower masculinity using fragmentary and decentered aesthetic that culminate in a masculine genre populated by male outlaws. Critics also note the absence of any reference to Africa or an African-American character in the quintessential cyberpunk film Blade Runner while other films reinforce stereotypes. Media Literature Minnesota writer Bruce Bethke coined the term in 1983 for his short story "Cyberpunk," which was published in an issue of Amazing Science Fiction Stories. The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance. John Brunner's 1975 novel The Shockwave Rider is considered by many to be the first cyberpunk novel with many of the tropes commonly associated with the genre, some five years before the term was popularized by Dozois. William Gibson with his novel Neuromancer (1984) is arguably the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as "the archetypal cyberpunk work," Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed after Gibson's popular debut novel. According to the Jargon File, "Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating." 
Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality. Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. These critics said that the SF New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned. Furthermore, while Neuromancer's narrator may have had an unusual "voice" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939). Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works—often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs. For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities. The influential cyberpunk movie Blade Runner (1982) is based on his book, Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968). In 1994, scholar Brian Stonehill suggested that Thomas Pynchon's 1973 novel Gravity's Rainbow "not only curses but precurses what we now glibly dub cyberspace." Other important predecessors include Alfred Bester's two most celebrated novels, The Demolished Man and The Stars My Destination, as well as Vernor Vinge's novella True Names. Reception and impact Science-fiction writer David Brin describes cyberpunk as "the finest free promotion campaign ever waged on behalf of science fiction." It may not have attracted the "real punks," but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the "self-important rhetoric and whines of persecution" on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the "rebels did shake things up. We owe them a debt." Fredric Jameson considers cyberpunk the "supreme literary expression if not of postmodernism, then of late capitalism itself". Cyberpunk further inspired many professional writers who were not among the "original" cyberpunks to incorporate cyberpunk ideas into their own works, such as George Alec Effinger's When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today's cyberpunk fans, which Paula Yoo claims "proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world." Film and television The film Blade Runner (1982)—adapted from Philip K. Dick's Do Androids Dream of Electric Sheep?—is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who "retire" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film. Since the movie omits the religious and mythical elements of Dick's original novel (e.g. 
empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision for Neuromancer, a book he was then working on. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix trilogy (1999-2003), which uses a wide variety of cyberpunk elements. The number of films in the genre or at least using a few genre elements has grown steadily since Blade Runner. Several of Philip K. Dick's works have been adapted to the silver screen. The films Johnny Mnemonic and New Rose Hotel, both based upon short stories by William Gibson, flopped commercially and critically. These box offices misses significantly slowed the development of cyberpunk as a literary or cultural form although a sequel to the 1982 film Blade Runner was released in | stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films. Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film, as well as Upgrade (2018), Dredd (2012) which was not a sequel to the original movie, Alita: Battle Angel (2019) based on the 1990s Japanese manga Battle Angel Alita, the 2018 Netflix TV series Altered Carbon based on Richard K. Morgan's 2002 novel of the same name, the 2020 remake of 1997 role-playing video game Final Fantasy VII, and the video game Cyberpunk 2077 (2020) based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk. Background Lawrence Person has attempted to define the content and ethos of the cyberpunk literary movement stating: Cyberpunk plots often center on conflict among artificial intelligences, hackers, and megacorporations, and tend to be set in a near-future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov's Foundation or Frank Herbert's Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors ("the street finds its own uses for things"). Much of the genre's atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction. There are sources who view that cyberpunk has shifted from a literary movement to a mode of science fiction due to the limited number of writers and its transition to a more generalized cultural formation. History and origins The origins of cyberpunk are rooted in the New Wave science fiction movement of the 1960s and 1970s, where New Worlds, under the editorship of Michael Moorcock, began inviting and encouraging stories that examined new writing styles, techniques, and archetypes. Reacting to conventional storytelling, New Wave authors attempted to present a world where society coped with a constant upheaval of new technology and culture, generally with dystopian outcomes. Writers like Roger Zelazny, J.G. Ballard, Philip Jose Farmer, Samuel R. Delany, and Harlan Ellison often examined the impact of drug culture, technology, and the sexual revolution with an avant-garde style influenced by the Beat Generation (especially William S. Burroughs' own SF), Dadaism, and their own ideas. 
Ballard attacked the idea that stories should follow the "archetypes" popular since the time of Ancient Greece, and the assumption that these would somehow be the same ones that would call to modern readers, as Joseph Campbell argued in The Hero with a Thousand Faces. Instead, Ballard wanted to write a new myth for the modern reader, a style with "more psycho-literary ideas, more meta-biological and meta-chemical concepts, private time systems, synthetic psychologies and space-times, more of the sombre half-worlds one glimpses in the paintings of schizophrenics." This had a profound influence on a new generation of writers, some of whom would come to call their movement "cyberpunk". One, Bruce Sterling, later said: Ballard, Zelazny, and the rest of the New Wave were seen by the subsequent generation as delivering more "realism" to science fiction, and they attempted to build on this. Samuel R. Delany's 1968 novel Nova is also considered one of the major forerunners of the cyberpunk movement. It prefigures, for instance, cyberpunk's staple trope of humans interfacing with computers via implants. Writer William Gibson claimed to be greatly influenced by Delany, and his novel Neuromancer includes allusions to Nova. Similarly influential, and generally cited as proto-cyberpunk, is the Philip K. Dick novel Do Androids Dream of Electric Sheep?, first published in 1968. Presenting precisely the kind of dystopian, post-economic-apocalyptic future that Gibson and Sterling would later deliver, it examines ethical and moral problems with cybernetic artificial intelligence in a way more "realist" than the Isaac Asimov Robot series that laid its philosophical foundation. Dick's protégé and friend K. W. Jeter wrote a novel called Dr. Adder in 1972 that, Dick lamented, might have been more influential in the field had it been able to find a publisher at that time. It was not published until 1984, after which Jeter made it the first book in a trilogy, followed by The Glass Hammer (1985) and Death Arms (1987).
worlds first comic strip. It satirised the political and social life of Scotland in the 1820s. Conceived and illustrated by William Heath. Swiss author and caricature artist Rodolphe Töpffer (Geneva, 1799–1846) is considered the father of the modern comic strips. His illustrated stories such as Histoire de M. Vieux Bois (1827), first published in the USA in 1842 as The Adventures of Obadiah Oldbuck or Histoire de Monsieur Jabot (1831), inspired subsequent generations of German and American comic artists. In 1865, German painter, author, and caricaturist Wilhelm Busch created the strip Max and Moritz, about two trouble-making boys, which had a direct influence on the American comic strip. Max and Moritz was a series of seven severely moralistic tales in the vein of German children's stories such as Struwwelpeter ("Shockheaded Peter"). In the story's final act, the boys, after perpetrating some mischief, are tossed into a sack of grain, run through a mill, and consumed by a flock of geese (without anybody mourning their demise). Max and Moritz provided an inspiration for German immigrant Rudolph Dirks, who created the Katzenjammer Kids in 1897 – a strip starring two German-American boys visually modelled on Max and Moritz. Familiar comic-strip iconography such as stars for pain, sawing logs for snoring, speech balloons, and thought balloons originated in Dirks' strip. Hugely popular, Katzenjammer Kids occasioned one of the first comic-strip copyright ownership suits in the history of the medium. When Dirks left William Randolph Hearst for the promise of a better salary under Joseph Pulitzer, it was an unusual move, since cartoonists regularly deserted Pulitzer for Hearst. In a highly unusual court decision, Hearst retained the rights to the name "Katzenjammer Kids", while creator Dirks retained the rights to the characters. Hearst promptly hired Harold Knerr to draw his own version of the strip. Dirks renamed his version Hans and Fritz (later, The Captain and the Kids). Thus, two versions distributed by rival syndicates graced the comics pages for decades. Dirks' version, eventually distributed by United Feature Syndicate, ran until 1979. In the United States, the great popularity of comics sprang from the newspaper war (1887 onwards) between Pulitzer and Hearst. The Little Bears (1893–96) was the first American comic strip with recurring characters, while the first color comic supplement was published by the Chicago Inter-Ocean sometime in the latter half of 1892, followed by the New York Journals first color Sunday comic pages in 1897. On January 31, 1912, Hearst introduced the nation's first full daily comic page in his New York Evening Journal. The history of this newspaper rivalry and the rapid appearance of comic strips in most major American newspapers is discussed by Ian Gordon. Numerous events in newspaper comic strips have reverberated throughout society at large, though few of these events occurred in recent years, owing mainly to the declining use of continuous storylines on newspaper comic strips, which since the 1970s had been waning as an entertainment form. From 1903 to 1905 Gustave Verbeek, wrote his comic series "The UpsideDowns of Old Man Muffaroo and Little Lady Lovekins". These comics were made in such a way that one could read the 6 panel comic, flip the book and keep reading. He made 64 such comics in total. The longest-running American comic strips are: The Katzenjammer Kids (1897–2006; 109 years) Gasoline Alley (1918–present) Ripley's Believe It or Not! 
(1918–present) Barney Google and Snuffy Smith (1919–present) Thimble Theater/Popeye (1919–present) Blondie (1930–present) Dick Tracy (1931–present) Alley Oop (1932–present) Bringing Up Father (1913–2000; 87 years) Little Orphan Annie (1924–2010; 86 years) Most newspaper comic strips are syndicated; a syndicate hires people to write and draw a strip and then distributes it to many newspapers for a fee. Some newspaper strips begin or remain exclusive to one newspaper. For example, the Pogo comic strip by Walt Kelly originally appeared only in the New York Star in 1948 and was not picked up for syndication until the following year. Newspaper comic strips come in two different types: daily strips and Sunday strips. In the United States, a daily strip appears in newspapers on weekdays, Monday through Saturday, as contrasted with a Sunday strip, which typically only appears on Sundays. Daily strips usually are printed in black and white, and Sunday strips are usually in color. However, a few newspapers have published daily strips in color, and some newspapers have published Sunday strips in black and white. Popularity While in the early 20th century comic strips were a frequent target for detractors of "yellow journalism", by the 1920s the medium became wildly popular. While radio, and later, television surpassed newspapers as a means of entertainment, most comic strip characters were widely recognizable until the 1980s, and the "funny pages" were often arranged in a way they appeared at the front of Sunday editions. In 1931, George Gallup's first poll had the comic section as the most important part of the newspaper, with additional surveys pointing out that the comic strips were the second most popular feature after the picture page. During the 1930s, many comic sections had between 12 and 16 pages, although in some cases, these had up to 24 pages. The popularity and accessibility of strips meant they were often clipped and saved; authors including John Updike and Ray Bradbury have written about their childhood collections of clipped strips. Often posted on bulletin boards, clipped strips had an ancillary form of distribution when they were faxed, photocopied or mailed. The Baltimore Suns Linda White recalled, "I followed the adventures of Winnie Winkle, Moon Mullins and Dondi, and waited each fall to see how Lucy would manage to trick Charlie Brown into trying to kick that football. (After I left for college, my father would clip out that strip each year and send it to me just to make sure I didn’t miss it.)" Production and format The two conventional formats for newspaper comics are strips and single gag panels. The strips are usually displayed horizontally, wider than they are tall. Single panels are square, circular or taller than they are wide. Strips usually, but not always, are broken up into several smaller panels with continuity from panel to panel. A horizontal strip can also be used for a single panel with a single gag, as seen occasionally in Mike Peters' Mother Goose and Grimm. Early daily strips were large, often running the entire width of the newspaper, and were sometimes three or more inches high. Initially, a newspaper page included only a single daily strip, usually either at the top or the bottom of the page. By the 1920s, many newspapers had a comics page on which many strips were collected together. During the 1930s, the original art for a daily strip could be drawn as large as 25 inches wide by six inches high. 
Over decades, the size of daily strips became smaller and smaller, until by 2000, four standard daily strips could fit in an area once occupied by a single daily strip. As strips have become smaller, the number of panels have been reduced. Proof sheets were the means by which syndicates provided newspapers with black-and-white line art for the reproduction of strips (which they arranged to have colored in the case of Sunday strips). Michigan State University Comic Art Collection librarian Randy Scott describes these as "large sheets of paper on which newspaper comics have traditionally been distributed to subscribing newspapers. Typically each sheet will have either six daily strips of a given title or one Sunday strip. Thus, a week of Beetle Bailey would arrive at the Lansing State Journal in two sheets, printed much larger than the final version and ready to be cut apart and fitted into the local comics page." Comic strip historian Allan Holtz described how strips were provided as mats (the plastic or cardboard trays in which molten metal is poured to make plates) or even plates ready to be put directly on the printing press. He also notes that with electronic means of distribution becoming more prevalent printed sheets "are definitely on their way out." NEA Syndicate experimented briefly with a two-tier daily strip, Star Hawks, but after a few years, Star Hawks dropped down to a single tier. In Flanders, the two-tier strip is the standard publication style of most daily strips like Spike and Suzy and Nero. They appear Monday through Saturday; until 2003 there were no Sunday papers in Flanders. In the last decades, they have switched from black and white to color. Cartoon panels Single panels usually, but not always, are not broken up and lack continuity. The daily Peanuts is a strip, and the daily Dennis the Menace is a single panel. J. R. Williams' long-run Out Our Way continued as a daily panel even after it expanded into a Sunday strip, Out Our Way with the Willets. Jimmy Hatlo's They'll Do It Every Time was often displayed in a two-panel format with the first panel showing some deceptive, pretentious, unwitting or scheming human behavior and the second panel revealing the truth of the situation. Sunday comics Sunday newspapers traditionally included a special color section. Early Sunday strips (known colloquially as "the funny papers", shortened to "the funnies"), such as Thimble Theatre and Little Orphan Annie, filled an entire newspaper page, a format known to collectors as full page. Sunday pages during the 1930s and into the 1940s often carried a secondary strip by the same artist as the main strip. No matter whether it appeared above or below a main strip, the extra strip was known as the topper, such as The Squirrel Cage which ran along with Room and Board, both drawn by Gene Ahern. During the 1930s, the original art for a Sunday strip was usually drawn quite large. For example, in 1930, Russ Westover drew his Tillie the Toiler Sunday page at a size of 17" × 37". In 1937, the cartoonist Dudley Fisher launched the innovative Right Around Home, drawn as a huge single panel filling an entire Sunday page. Full-page strips were eventually replaced by strips half that size. Strips such as The Phantom and Terry and the Pirates began appearing in a format of two strips to a page in full-size newspapers, such as the New Orleans Times Picayune, or with one strip on a tabloid page, as in the Chicago Sun-Times. 
When Sunday strips began to appear in more than one format, it became necessary for the cartoonist to allow for rearranged, cropped or dropped panels. During World War II, because of paper shortages, the size of Sunday strips began to shrink. After the war, strips continued to get smaller and smaller because of increased paper and printing costs. The last full-page comic strip was the Prince Valiant strip for 11 April 1971. Comic strips have also been published in Sunday newspaper magazines. Russell Patterson and Carolyn Wells' New Adventures of Flossy Frills was a continuing strip series seen on Sunday magazine covers. Beginning January 26, 1941, it ran on the front covers of Hearst's American Weekly newspaper magazine supplement, continuing until March 30 of that year. Between 1939 and 1943, four different stories featuring Flossy appeared on American Weekly covers. Sunday comics sections employed offset color printing with multiple print runs imitating a wide range of colors. Printing plates were created with four or more colors—traditionally, the CMYK color model: cyan, magenta, yellow and "K" for black. With a screen of tiny dots on each printing plate, the dots allowed an image to be printed in a halftone that appears to the eye in different gradations. The semi-opaque property of ink allows halftone dots of different colors to create an optical effect of full-color imagery. Underground comic strips The decade of the 1960s saw the rise of underground newspapers, which often carried comic strips, such as Fritz the Cat and The Fabulous Furry Freak Brothers. Zippy the Pinhead initially appeared in underground publications in the 1970s before being syndicated. Bloom County and Doonesbury began as strips in college newspapers under different titles, and later moved to national syndication. Underground comic strips covered subjects that are usually taboo in newspaper strips, such as sex and drugs. Many underground artists, notably Vaughn Bode, Dan O'Neill, Gilbert Shelton, and Art Spiegelman went on to draw comic strips for magazines such as Playboy, National Lampoon, and Pete Millar's CARtoons. Jay Lynch graduated from undergrounds to alternative weekly newspapers to Mad and children's books. Webcomics Webcomics, also known as online comics and internet comics, are comics that are available to read on the Internet. Many are exclusively published online, but the majority of traditional newspaper comic strips have some Internet presence. King Features Syndicate and other syndicates often provide archives of recent strips on their websites. Some, such as Scott Adams, creator of Dilbert, include an email address in each strip. Conventions and genres Most comic strip characters do not age throughout the strip's life, but in some strips, like Lynn Johnston's award-winning For Better or For Worse, the characters age as the years pass. The first strip to feature aging characters was Gasoline Alley. The history of comic strips also includes series that are not humorous, but tell an ongoing dramatic story. Examples include The Phantom, Prince Valiant, Dick Tracy, Mary Worth, Modesty Blaise, Little Orphan Annie, Flash Gordon, and Tarzan. Sometimes these are spin-offs from comic books, for example Superman, Batman, and The Amazing Spider-Man. A number of strips have featured animals as main characters. 
Some are non-verbal (Marmaduke, The Angriest Dog in the World), some have verbal thoughts but are not understood by humans (Garfield, Snoopy in Peanuts), and some can converse with humans (Bloom County, Calvin and Hobbes, Mutts, Citizen Dog, Buckles, Get Fuzzy, Pearls Before Swine, and Pooch Cafe). Other strips are centered entirely on animals, as in Pogo and Donald Duck. Gary Larson's The Far Side was unusual, as there were no central characters. Instead The Far Side used a wide variety of characters including humans, monsters, aliens, chickens, cows, worms, amoebas, and more. John McPherson's Close to Home also uses this theme, though the characters are mostly restricted to humans and real-life situations. Wiley Miller not only mixes human, animal, and fantasy characters, but also does several different comic strip continuities under one umbrella title, Non Sequitur. Bob Thaves's Frank & Ernest began in 1972 and paved the way for some of these strips, as its human characters were manifest in diverse forms — as animals, vegetables, and minerals. Social and political influence The comics have long held a distorted mirror to contemporary society, and almost from the beginning have been used for political or social commentary. This ranged from the conservative slant of Harold Gray's Little Orphan Annie to the unabashed liberalism of Garry Trudeau's Doonesbury. Al Capp's Li'l Abner espoused liberal opinions for most of its run, but by the late 1960s, it became a mouthpiece for Capp's repudiation of the counterculture. Pogo used animals to particularly devastating effect, caricaturing many prominent politicians of the day as animal denizens of Pogo's Okeefenokee Swamp. In a fearless move, Pogo's creator Walt Kelly took on Joseph McCarthy in the 1950s, caricaturing him as a bobcat named Simple J. Malarkey, a megalomaniac who was bent on taking over the characters' birdwatching club and rooting out all undesirables. Kelly also defended the medium against possible government regulation in the McCarthy era.
At a time when comic books were coming under fire for supposed sexual, violent, and subversive content, Kelly feared the same would happen to comic strips. Going before the Congressional subcommittee, he proceeded to charm the members with his drawings and the force of his personality. The comic strip was safe for satire. During the early 20th century, comic strips were widely associated with publisher William Randolph Hearst, whose papers had the largest circulation of strips in the United States. Hearst was notorious for his practice of yellow journalism, and he was frowned on by readers of The New York Times and other newspapers which featured few or no comic strips. Hearst's critics often assumed that all the strips in his papers were fronts for his own political and social views. Hearst did occasionally work with or pitch ideas to cartoonists, most notably his continued support of George Herriman's Krazy Kat. An inspiration for Bill Watterson and other cartoonists, Krazy Kat gained a considerable following among intellectuals during the 1920s and 1930s. Some comic strips, such as Doonesbury and Mallard Fillmore, may be printed on the editorial or op-ed page rather than the comics page because of their regular political commentary. For example, the August 12, 1974 Doonesbury strip was awarded a 1975 Pulitzer Prize for its depiction of the Watergate scandal. Dilbert is sometimes found in the business section of a newspaper instead of the comics page because of the strip's commentary about office politics, and Tank McNamara often appears on the sports page because of its subject matter. Lynn Johnston's For Better or For Worse created an uproar when Lawrence, one of the strip's supporting characters, came out of the closet. Publicity and recognition The world's longest comic strip is long and on display at Trafalgar Square as part of the London Comedy Festival. The London Cartoon Strip was created by 15 of Britain's best known cartoonists and depicts the history of London. The Reuben, named for cartoonist Rube Goldberg, is the most prestigious award for U.S. comic strip artists. Reuben awards are presented annually by the National Cartoonists Society (NCS). In 1995, the United States Postal Service |
of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH. Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false. At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively true" but others have disagreed. A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the $(*)$-axiom, or "Star axiom". The Star axiom would imply that $2^{\aleph_0}$ is $\aleph_2$, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture. Solomon Feferman has argued that CH is not a definite mathematical problem. He proposes a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggests that a proposition $\varphi$ is mathematically "definite" if the semi-intuitionistic theory can prove $\varphi \lor \neg\varphi$. He conjectures that CH is not definite according to this notion, and proposes that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article. Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC". The generalized continuum hypothesis The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set of S, then it has the same cardinality as either S or $\mathcal{P}(S)$. 
That is, for any infinite cardinal $\lambda$ there is no cardinal $\kappa$ such that $\lambda < \kappa < 2^{\lambda}$. GCH is equivalent to: $\aleph_{\alpha+1} = 2^{\aleph_\alpha}$ for every ordinal $\alpha$ (occasionally called Cantor's aleph hypothesis). The beth numbers provide an alternate notation for this condition: $\aleph_\alpha = \beth_\alpha$ for every ordinal $\alpha$. The continuum hypothesis is the special case for the ordinal $\alpha = 1$. | real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets. Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question. The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers, $|\mathbb{R}| = 2^{\aleph_0}$, the continuum hypothesis says that there is no set $S$ for which $\aleph_0 < |S| < 2^{\aleph_0}$. Assuming the axiom of choice, there is a smallest cardinal number $\aleph_1$ greater than $\aleph_0$, and the continuum hypothesis is in turn equivalent to the equality $2^{\aleph_0} = \aleph_1$. Independence from ZFC The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen. Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories. Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof. The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if $\kappa$ is a cardinal of uncountable cofinality, then there is a forcing extension in which $2^{\aleph_0} = \kappa$. However, per König's theorem, it is not consistent to assume $2^{\aleph_0}$ is $\aleph_\omega$ or any cardinal with cofinality $\aleph_0$. 
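Because the formulas above arrive piecemeal, a compact restatement in standard notation may help; this is only a summary of statements already given in the text, written with the usual aleph, beth, and cofinality symbols (it assumes the amsmath package):

```latex
% Compact restatement of the hypotheses discussed above, in standard notation.
\[
\textbf{CH:}\quad 2^{\aleph_0} = \aleph_1,
\qquad
\textbf{GCH:}\quad 2^{\aleph_\alpha} = \aleph_{\alpha+1}
  \ \text{ for every ordinal } \alpha
  \quad (\text{equivalently, } \beth_\alpha = \aleph_\alpha \text{ for every } \alpha).
\]
% K\"onig's theorem constrains which values the continuum can take:
% its cofinality must be uncountable,
\[
\operatorname{cf}\bigl(2^{\aleph_0}\bigr) > \aleph_0,
\]
% so, for example, 2^{\aleph_0} = \aleph_\omega is ruled out, while Solovay's result
% shows that any \kappa of uncountable cofinality is realised in some forcing extension.
```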
The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well. The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. Hilbert's problem remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status. The continuum hypothesis was not the first statement shown |
dictator Siad Barre’s ousting, conflicts between General Mohammed Farah Aidid's faction and other clans in Somalia had led to famine and lawlessness throughout the country. An estimated 300,000 people had died from starvation. A combined military force of the United States and the United Nations (under the name "UNOSOM") was deployed to Mogadishu to monitor the ceasefire and deliver food and supplies to the starving people of Somalia. Çevik Bir, who was then a lieutenant-general of Turkey, became the force commander of UNOSOM II in April 1993. Despite the retreat of US and UN forces after several deaths due to local hostilities mainly led by Aidid, the introduction of a powerful military force opened the transportation routes, enabled the provision of supplies, and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994. He became a four-star | introduction of a powerful military force opened the transportation routes, enabled the provision of supplies, and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994. He became a four-star general and served three years as vice chairman of the Turkish Armed Forces, and was then appointed commander of the Turkish First Army in Istanbul. While he was vice chairman of the TAF, he signed the Turkish-Israeli Military Coordination agreement in 1996. Çevik Bir became the Turkish army's deputy chief of general staff shortly after the Somali operation and played a vital role in establishing a Turkish-Israeli entente. Çevik Bir retired from the army on August 30, 1999. He is a former member of the Association for the Study of the Middle East and Africa (ASMEA). On April 12, 2012, Bir and 30 other officers were taken into custody for their role in the 1997 military memorandum that forced the then Turkish government, led by the Refah Partisi (Welfare Party), to step down. Çevik Bir, one of the generals who planned the process, said "In Turkey we have a marriage of Islam and democracy. (…) The child of this marriage is secularism. Now this child gets sick from time to time. The Turkish Armed |
the official language and praised in agitprop literature, for example by Vladimir Mayakovsky (Who needs a "1") and Bertolt Brecht (The Decision, Man Equals Man). Anarcho-collectivism Anarcho-collectivism deals with collectivism in a decentralized anarchistic system, in which people are paid off their surplus labor. Collectivist anarchism is contrasted with anarcho-communism, where wages would be abolished and where individuals would take freely from a storehouse of goods "to each according to his need". It is most commonly associated with Mikhail Bakunin, the anti-authoritarian sections of the International Workingmen's Association and the early Spanish anarchist movement. Corporatism Corporatism is sometimes seen as an ideology which relies on collectivist co-operation as one of its central components. The term is derived from the Latin corpus, or "human body", which in this case means that society should function like unto a body, through the means of loyalty to an individual's in-group or corpus. Collective bargaining is one example of corporatist economic principles. Often, state-sanctioned bargaining is considered collectivist. Terminology and measurement The construct of collectivism is represented in empirical literature under several different names. Most commonly, the term interdependent self-construal is used. Other phrases used to describe the concept of collectivism-individualism include allocentrism-idiocentrism, collective-private self, as well as subtypes of collectivism-individualism (meaning, vertical and horizontal subtypes). Inconsistent terminology is thought to account for some of the difficulty in effectively synthesizing the empirical literature on collectivism. See also Collective guilt Collective identity Collective leadership Collective narcissism Collective responsibility Communitarianism Corporatism Cultural conservatism | over individual goals. Hofstede insights describes collectivism as: "Collectivism, represents a preference for a tightly-knit framework in society in which individuals can expect their relatives or members of a particular ingroup to look after them in exchange for unquestioning loyalty." Marxism–Leninism Collectivism was an important part of Marxist–Leninist ideology in the Soviet Union, where it played a key part in forming the New Soviet man, willingly sacrificing his or her life for the good of the collective. Terms such as "collective" and "the masses" were frequently used in the official language and praised in agitprop literature, for example by Vladimir Mayakovsky (Who needs a "1") and Bertolt Brecht (The Decision, Man Equals Man). Anarcho-collectivism Anarcho-collectivism deals with collectivism in a decentralized anarchistic system, in which people are paid off their surplus labor. Collectivist anarchism is contrasted with anarcho-communism, where wages would be abolished and where individuals would take freely from a storehouse of goods "to each according to his need". It is most commonly associated with Mikhail Bakunin, the anti-authoritarian sections of the International Workingmen's Association and the early Spanish anarchist movement. Corporatism Corporatism is sometimes seen as an ideology which relies on collectivist co-operation as one of its central components. The term is derived from the Latin corpus, or "human body", which in this case means that society should function like unto a body, through the means of |
Benth. Nepeta gontscharovii Kudrjasch. Nepeta govaniana (Wall. ex Benth.) Benth. Nepeta graciliflora Benth. Nepeta granatensis Boiss. Nepeta grandiflora M.Bieb. Nepeta grata Benth. Nepeta griffithii Hedge Nepeta heliotropifolia Lam. Nepeta hemsleyana Oliv. ex Prain Nepeta henanensis C.S.Zhu Nepeta hindostana (B.Heyne ex Roth) Haines Nepeta hispanica Boiss. & Reut. Nepeta hormozganica Jamzad Nepeta humilis Benth. Nepeta hymenodonta Boiss. Nepeta isaurica Boiss. & Heldr. ex Benth. Nepeta ispahanica Boiss. Nepeta italica L. Nepeta jakupicensis Micevski Nepeta jomdaensis H.W.Li Nepeta juncea Benth. Nepeta knorringiana Pojark. Nepeta koeieana Rech.f. Nepeta kokamirica Regel Nepeta kokanica Regel Nepeta komarovii E.A.Busch Nepeta kotschyi Boiss. Nepeta kurdica Hausskn. & Bornm. Nepeta kurramensis Rech.f. Nepeta ladanolens Lipsky Nepeta laevigata (D.Don) Hand.-Mazz. Nepeta lagopsis Benth. Nepeta lamiifolia Willd. Nepeta lamiopsis Benth. ex Hook.f. Nepeta lasiocephala Benth. Nepeta latifolia DC. Nepeta leucolaena Benth. ex Hook.f. Nepeta linearis Royle ex Benth. Nepeta lipskyi Kudrjasch. Nepeta longibracteata Benth. Nepeta longiflora Vent. Nepeta longituba Pojark. Nepeta ludlow-hewittii Blakelock Nepeta macrosiphon Boiss. Nepeta mahanensis Jamzad & M.Simmonds Nepeta manchuriensis S.Moore Nepeta mariae Regel Nepeta maussarifii Lipsky Nepeta melissifolia Lam. Nepeta membranifolia C.Y.Wu Nepeta menthoides Boiss. & Buhse Nepeta meyeri Benth. Nepeta micrantha Bunge Nepeta minuticephala Jamzad Nepeta mirzayanii Rech.f. & Esfand. Nepeta mollis Benth. Nepeta monocephala Rech.f. Nepeta monticola Kudr. Nepeta multibracteata Desf. Nepeta multicaulis Mukerjee Nepeta multifida L. Nepeta natanzensis Jamzad Nepeta nawarica Rech.f. Nepeta nepalensis Spreng. Nepeta nepetella L. Nepeta nepetellae Forssk. Nepeta nepetoides (Batt. ex Pit.) Harley Nepeta nervosa Royle ex Benth. Nepeta nuda L. Nepeta obtusicrena Boiss. & Kotschy ex Hedge Nepeta odorifera Lipsky Nepeta olgae Regel Nepeta orphanidea Boiss. Nepeta pabotii Mouterde Nepeta paktiana Rech.f. Nepeta pamirensis Franch. Nepeta parnassica Heldr. & Sart. Nepeta paucifolia Mukerjee Nepeta persica Boiss. Nepeta petraea Benth. Nepeta phyllochlamys P.H.Davis Nepeta pilinux P.H.Davis Nepeta podlechii Rech.f. Nepeta podostachys Benth. Nepeta pogonosperma Jamzad & Assadi Nepeta polyodonta Rech.f. Nepeta praetervisa Rech.f. Nepeta prattii H.Lév. Nepeta prostrata Benth. Nepeta pseudokokanica Pojark. Nepeta pubescens Benth. Nepeta pungens (Bunge) Benth. Nepeta racemosa Lam. Nepeta raphanorhiza Benth. Nepeta rechingeri Hedge Nepeta rivularis Bornm. Nepeta roopiana Bordz. Nepeta rtanjensis Diklic & Milojevic Nepeta rubella A.L.Budantzev Nepeta rugosa Benth. Nepeta saccharata Bunge Nepeta santoana Popov Nepeta saturejoides Boiss. Nepeta schiraziana Boiss. Nepeta schmidii Rech.f. Nepeta schugnanica Lipsky Nepeta scordotis L. Nepeta septemcrenata Ehrenb. ex Benth. Nepeta sessilis C.Y.Wu | govaniana (Wall. ex Benth.) Benth. Nepeta graciliflora Benth. Nepeta granatensis Boiss. Nepeta grandiflora M.Bieb. Nepeta grata Benth. Nepeta griffithii Hedge Nepeta heliotropifolia Lam. Nepeta hemsleyana Oliv. ex Prain Nepeta henanensis C.S.Zhu Nepeta hindostana (B.Heyne ex Roth) Haines Nepeta hispanica Boiss. & Reut. Nepeta hormozganica Jamzad Nepeta humilis Benth. Nepeta hymenodonta Boiss. Nepeta isaurica Boiss. & Heldr. ex Benth. Nepeta ispahanica Boiss. Nepeta italica L. Nepeta jakupicensis Micevski Nepeta jomdaensis H.W.Li Nepeta juncea Benth. Nepeta knorringiana Pojark. 
Nepeta koeieana Rech.f. Nepeta kokamirica Regel Nepeta kokanica Regel Nepeta komarovii E.A.Busch Nepeta kotschyi Boiss. Nepeta kurdica Hausskn. & Bornm. Nepeta kurramensis Rech.f. Nepeta ladanolens Lipsky Nepeta laevigata (D.Don) Hand.-Mazz. Nepeta lagopsis Benth. Nepeta lamiifolia Willd. Nepeta lamiopsis Benth. ex Hook.f. Nepeta lasiocephala Benth. Nepeta latifolia DC. Nepeta leucolaena Benth. ex Hook.f. Nepeta linearis Royle ex Benth. Nepeta lipskyi Kudrjasch. Nepeta longibracteata Benth. Nepeta longiflora Vent. Nepeta longituba Pojark. Nepeta ludlow-hewittii Blakelock Nepeta macrosiphon Boiss. Nepeta mahanensis Jamzad & M.Simmonds Nepeta manchuriensis S.Moore Nepeta mariae Regel Nepeta maussarifii Lipsky Nepeta melissifolia Lam. Nepeta membranifolia C.Y.Wu Nepeta menthoides Boiss. & Buhse Nepeta meyeri Benth. Nepeta micrantha Bunge Nepeta minuticephala Jamzad Nepeta mirzayanii Rech.f. & Esfand. Nepeta mollis Benth. Nepeta monocephala Rech.f. Nepeta monticola Kudr. Nepeta multibracteata Desf. Nepeta multicaulis Mukerjee Nepeta multifida L. Nepeta natanzensis Jamzad Nepeta nawarica Rech.f. Nepeta nepalensis Spreng. Nepeta nepetella L. Nepeta nepetellae Forssk. Nepeta nepetoides (Batt. ex Pit.) Harley Nepeta nervosa Royle ex Benth. Nepeta nuda L. Nepeta obtusicrena Boiss. & Kotschy ex Hedge Nepeta odorifera Lipsky Nepeta olgae Regel Nepeta orphanidea Boiss. Nepeta pabotii Mouterde Nepeta paktiana Rech.f. Nepeta pamirensis Franch. Nepeta parnassica Heldr. & Sart. Nepeta paucifolia Mukerjee Nepeta persica Boiss. Nepeta petraea Benth. Nepeta phyllochlamys P.H.Davis Nepeta pilinux P.H.Davis Nepeta podlechii Rech.f. Nepeta podostachys Benth. Nepeta pogonosperma Jamzad & Assadi Nepeta polyodonta Rech.f. Nepeta praetervisa Rech.f. Nepeta prattii H.Lév. Nepeta prostrata Benth. Nepeta pseudokokanica Pojark. Nepeta pubescens Benth. Nepeta pungens (Bunge) Benth. Nepeta racemosa Lam. Nepeta raphanorhiza Benth. Nepeta rechingeri Hedge Nepeta rivularis Bornm. Nepeta roopiana Bordz. Nepeta rtanjensis Diklic & Milojevic Nepeta rubella A.L.Budantzev Nepeta rugosa Benth. Nepeta saccharata Bunge Nepeta santoana Popov Nepeta saturejoides Boiss. Nepeta schiraziana Boiss. Nepeta schmidii Rech.f. Nepeta schugnanica Lipsky Nepeta scordotis L. Nepeta septemcrenata Ehrenb. ex Benth. Nepeta sessilis C.Y.Wu & S.J.Hsuan Nepeta shahmirzadensis Assadi & Jamzad Nepeta sheilae Hedge & R.A.King Nepeta sibirica L. Nepeta sorgerae Hedge & Lamond Nepeta sosnovskyi Askerova Nepeta souliei H.Lév. Nepeta spathulifera Benth. Nepeta sphaciotica P.H.Davis Nepeta spruneri Boiss. Nepeta stachyoides Coss. ex Batt. Nepeta staintonii Hedge Nepeta stenantha Kotschy & Boiss. Nepeta stewartiana Diels Nepeta straussii Hausskn. & Bornm. Nepeta stricta (Banks & Sol.) Hedge & Lamond Nepeta suavis Stapf Nepeta subcaespitosa Jehan Nepeta subhastata Regel Nepeta subincisa Benth. Nepeta subintegra Maxim. Nepeta subsessilis Maxim. Nepeta sudanica F.W.Andrews Nepeta sulfuriflora P.H.Davis Nepeta sulphurea C. Koch Nepeta sungpanensis C.Y.Wu Nepeta supina Steven Nepeta taxkorganica Y.F.Chang Nepeta tenuiflora Diels Nepeta tenuifolia Benth. Nepeta teucriifolia Willd. Nepeta teydea Webb & Berthel. Nepeta tibestica Maire Nepeta × tmolea Boiss. Nepeta trachonitica Post Nepeta transiliensis Pojark. Nepeta trautvetteri Boiss. & Buhse Nepeta trichocalyx Greuter |
Programme of the CNP were published in 1975 and included the following points: To look after the interests of Cornish people. To preserve and enhance the identity of Kernow, an essentially Celtic identity. To achieve self-government for Kernow. Total sovereignty will be exercised by the Cornish state over the land within its traditional border. Kernow's official language will be Cornish. Better job prospects for Cornish people. Reduction of unemployment to an acceptable level (2.5%). The protection of the self-employed and small businesses in Cornwall. Cheaper housing and priority for Cornish people. Discouragement of second homes. Controls over tourism. The Cornish state will have control over the number and nature of immigrants. The establishment of a Cornish economic department to aid the basic industries of farming, fishing, china clay and mining and secondary industries developing from these. Improved transport facilities in Cornwall with greater scope for private enterprise to operate. Existing medical and welfare services for Cornish people will be developed and improved. Protection of Cornish natural resources, including offshore resources. Conservation of the Cornish landscape and the unique Cornish environment, culture and identity. Courses on Cornish language and history should be made available in schools for those who want them. Recognition of the Cornish flag of St Piran and the retention of the Tamar border with England. The rule of law will be upheld by the Cornish state and the judiciary will be separate from the legislative and executive functions of the state. The Cornish state will create a home defence force, linked to local communities and civil units of administration. Young Cornish people will be given instruction as to world religions and secular philosophies but the greatest attention will be given to Christianity and early Celtic beliefs. A far greater say in government for Cornish people (by referenda if necessary) and the decentralisation of considerable powers to a Cornish nation within a united Europe - special links being established with our Celtic brothers and sisters in Scotland, Ireland, Isle of Man, Wales and Brittany. There have been perceived image problems as the CNP has been seen as similarly styled BNP and NF | small businesses in Cornwall. Cheaper housing and priority for Cornish people. Discouragement of second homes. Controls over tourism. The Cornish state will have control over the number and nature of immigrants. The establishment of a Cornish economic department to aid the basic industries of farming, fishing, china clay and mining and secondary industries developing from these. Improved transport facilities in Cornwall with greater scope for private enterprise to operate. Existing medical and welfare services for Cornish people will be developed and improved. Protection of Cornish natural resources, including offshore resources. Conservation of the Cornish landscape and the unique Cornish environment, culture and identity. Courses on Cornish language and history should be made available in schools for those who want them. Recognition of the Cornish flag of St Piran and the retention of the Tamar border with England. The rule of law will be upheld by the Cornish state and the judiciary will be separate from the legislative and executive functions of the state. The Cornish state will create a home defence force, linked to local communities and civil units of administration. 
Young Cornish people will be given instruction as to world religions and secular philosophies but the greatest attention will be given to Christianity and early Celtic beliefs. A far greater say in government for Cornish people (by referenda if necessary) and the decentralisation of considerable powers to a Cornish nation within a united Europe - special links being established with our Celtic brothers and sisters in Scotland, Ireland, Isle of Man, Wales and Brittany. There have been perceived image problems as the CNP has been seen as similarly styled to the BNP and NF (the nativist British National Party and National Front), and during the 1970s letters were published in the party magazine The Cornish Banner (An Baner Kernewek) sympathetic to the NF and critical of "Zionist" politicians. The CNP also formed a controversial uniformed wing known as the Greenshirts, led by the CNP Youth Movement leader and Public Relations Officer, Wallace Simmons, who also founded the pro-NF Cornish Front. (Although the CNP and CF were sympathetic to Irish republicanism while the NF was supportive of Ulster loyalism, with the exception of leading NF figures like Patrick Harrington, who refused to condemn the IRA during an interview for the Channel 4 TV documentary Disciples of Chaos). The CNP polled 227 votes (0.4%) in Truro during the 1979 UK General Election, 364 (0.67%) in North Cornwall in the 1983 UK General Election, and 1,892 (1.0%) at the European Parliament elections in the Cornwall and Plymouth constituency in 1984. The candidate on all three occasions was the founder and first leader of the CNP, Dr James Whetter. The CNP was for some time seen as more of a pressure group, as it did not put up candidates for any elections, although its visibility and influence within Cornwall is negligible. It is now registered on the UK political parties register, and so Mebyon Kernow is no longer the
on block ciphers according to the amount and quality of secret information that was discovered: Total break – the attacker deduces the secret key. Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key. Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known. Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known. Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation. Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions. In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system. History Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis. Classical ciphers Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods. The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. 
Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains. Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions of frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis. In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis. Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes. In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system. Ciphers from World War I and World War II In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in the cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the end of the European war by up to two years to determining the eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence. Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. 
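As a concrete illustration of the frequency analysis described above, here is a minimal Python sketch; the letter ranking, the toy Caesar-shifted ciphertext, and the function names are illustrative assumptions rather than anything from the source, and a real attack would also use digraph statistics and much longer ciphertext:

```python
from collections import Counter

# Approximate ranking of English letters from most to least common (illustrative).
ENGLISH_FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_table(ciphertext: str) -> list[tuple[str, int]]:
    """Count how often each letter occurs in the ciphertext."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()

def guess_substitution(ciphertext: str) -> dict[str, str]:
    """Naive first guess for a simple substitution cipher: map the most frequent
    ciphertext letters onto 'E', 'T', 'A', ... in order.  Short samples give poor
    statistics, so real attacks refine this with digraphs (e.g. 'TH') and cribs."""
    ranked = [letter for letter, _ in frequency_table(ciphertext)]
    return dict(zip(ranked, ENGLISH_FREQ_ORDER))

if __name__ == "__main__":
    sample = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"  # toy ciphertext (Caesar shift of 3)
    print(frequency_table(sample)[:5])
    print(guess_substitution(sample))
```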
Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended. In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a | ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods. The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains. Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis. In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis. Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. 
For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes. In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system. Ciphers from World War I and World War II In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything between shortening the end of the European war by up to two years, to determining the eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence. Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham, quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended. In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program. Indicator With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message. Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. 
Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine. Depth Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message. Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ): Plaintext ⊕ Key = Ciphertext Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext: Ciphertext ⊕ Key = Plaintext (In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts: Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2 The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component: (Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2 The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed: Plaintext1 ⊕ Ciphertext1 = Key Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them. Development of modern cryptography Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today. Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn notes: Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. 
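The depth relations above can be checked mechanically; the short Python sketch below uses made-up plaintexts and a made-up keystream to show that combining two ciphertexts in depth cancels the key, that a correct crib recovers the other plaintext, and that a recovered plaintext exposes the key:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR (modulo-2 addition) of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two messages enciphered "in depth", i.e. with the same keystream (illustrative values).
key = bytes([0x3A, 0x91, 0x5C, 0x07, 0xE4, 0x22, 0x6B, 0xD0, 0x48, 0x9F, 0x13, 0x75])
p1 = b"ATTACK AT DA"
p2 = b"DEFEND THE H"
c1 = xor_bytes(p1, key)
c2 = xor_bytes(p2, key)

# Combining the two ciphertexts eliminates the common key, leaving P1 xor P2.
combined = xor_bytes(c1, c2)
assert combined == xor_bytes(p1, p2)

# A correct crib for one plaintext yields the other plaintext at that position...
assert xor_bytes(combined, p1) == p2
# ...and a recovered plaintext combined with its ciphertext reveals the key,
# exposing every other message sent under the same key.
assert xor_bytes(p1, c1) == key
```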
In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field." However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography: The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998. FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical. The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes or even in real-time using widely available computing equipment. Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System. In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access. In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated. Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active. Symmetric ciphers Boomerang |
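The brute-force keyspace searches mentioned above all follow the same pattern, differing only in the size of the keyspace and in how a plausible plaintext is recognised; the following is a toy sketch against a 26-key Caesar shift, with the crib test and names chosen purely for illustration:

```python
def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Shift every letter back by `shift` positions (a 26-key toy cipher)."""
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def brute_force(ciphertext: str, crib: str = "THE") -> list[tuple[int, str]]:
    """Try every key and keep candidates that contain a probable word (crib).
    Searches against DES or RC4 differ only in scale and in the plausibility test."""
    hits = []
    for shift in range(26):
        candidate = caesar_decrypt(ciphertext, shift)
        if crib in candidate.upper():
            hits.append((shift, candidate))
    return hits

if __name__ == "__main__":
    print(brute_force("WKH TXLFN EURZQ IRA"))
```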
Chicano Manifesto (1971), "I am Chicano. What it means to me may be different than what it means to you." Similarly, writer Benjamin Alire Sáenz wrote "There is no such thing as the Chicano voice: there are only Chicano and Chicana voices." The identity thus may be understood as somewhat ambiguous (e.g. in the 1991 Culture Clash play A Bowl of Beings, in response to Che Guevara's demand for a definition of "Chicano," an "armchair activist" cries out, "I still don't know!"). However, as substantiated by Chicanas/os since the Chicano Movement, many Chicanos/as understand themselves as being "neither from here, nor from there," in reference to the United States and Mexico. Juan Bruce-Novoa, a professor of Spanish and Portuguese at University of California, Irvine, wrote in 1990: "A Chicano lives in the space between the hyphen in Mexican-American." Being Chicano represents the struggle of being institutionally acculturated to assimilate into the Anglo-dominated society of the United States, while maintaining the cultural sense developed as a Latin-American cultured, U.S.-born Mexican child. As described by Rafael Pérez-Torres, "one can no longer assert the wholeness of a Chicano subject ... It is illusory to deny the nomadic quality of the Chicano communtiy, a community in flux that yet survives and, through survival, affirms its self." Ethnic identity From a popular perspective, the term Chicano became widely visible outside of Chicano communities during the American civil rights movement. It was commonly used during the mid-1960s by Mexican-American activists such as Rodolfo "Corky" Gonzales, who was one of the first to reclaim the term, in an attempt to assert their civil rights and rid the word of its polarizing negative connotations. Chicano soon became an identity for Mexican Americans to assert their ethnic pride, proudly identifying themselves as Chicanos/as while also asserting a notion of Brown Pride, drawing on the "Black is Beautiful" movement, inverting phrases of insult into forms of ethnic empowerment. As journalist Rubén Salazar described in a 1970 Los Angeles Times piece entitled "Who is a Chicano? And what is it the Chicanos want?": "A Chicano is a Mexican-American with a non-Anglo image of himself." After it was reclaimed, Chicano/a identity became a celebration of being non-white and non-European and worked against the state-sanctioned census categories of "Whites with Spanish Surnames," originally promulgated on the 1950 U.S. census, and "Mexican-American," which Chicanas/os felt encouraged assimilation into European American society. Chicanos/as asserted ethnic pride during a time when Mexican assimilation into whiteness was being actively promoted by the U.S. government in order to "serve Anglo self-interest," who tried to claim Chicano/as were white in order to deny racism against them, as noted by Ian Haney López. The U.S. Census Bureau provided no clear way for Mexican Americans or Latinos to officially identify as a racial/ethnic category prior to 1980, when the broader-than-Mexican term "Hispanic" was first available as a self-identification in census forms. While Chicano also appeared on the 1980 census, indicating the success of the Chicano Movement in gaining some federal recognition, it was only permitted to be selected as a subcategory underneath Spanish/Hispanic descent, which erased Afro-Chicanos/as and the visibility of Amerindian and African ancestries among Chicanas/os and populations throughout Latin America and the Caribbean. 
Chicana/o ethnic identity is born out of colonial encounters between Europe, Africa, and the Americas. Alfred Arteaga writes how the Chicana/o arose as a result of the violence of colonialism, emerging as a hybrid ethnicity or race. Arteaga acknowledges how this ethnic and racial hybridity among Chicanos is highly complex and extends beyond a previously generalized "Aztec" ancestry, as originally asserted during the formative years of the Chicano Movement. Chicano ethnic identity may involve more than just Spanish ancestry and may include African ancestry (as a result of Spanish slavery or runaway slaves from Anglo-Americans). Arteaga concludes that "the physical manifestation of the Chicano, is itself a product of hybridity." Afro-Chicanos/as, most of whom have origins in working class community interactions, have faced erasure from Chicano/a identity until recently. "Because so many people uncritically apply the 'one drop rule' in the U.S., our popular language ignores the complexity of racial hybridity," as described by Afro-Chicano poet Robert Quintana Hopkins. Black and Chicano/a communities have engaged in close political interactions "around civil rights struggles, union activism, and demographic changes," especially during the Black Power and Chicano Movement struggles for liberation in the 1960s and 1970s. There have also been tensions between Black and Chicano/a communities because of "increased competition for scarce resources," which has "too often positioned workers of different races in opposition to each other." Afro-Chicano photographer Walter Thompson-Hernandez reflected on how there were difficulties in his personal life because of racial conflicts between Black and Latino communities, yet stated how "being able to connect with other Blaxicans [Black-Mexicans] has allowed me to see that in all of my conclusions and struggles, I was never alone." Similarly, Afro-Chicano rapper Choosey stated "there's a stigma that Black and Mexican cultures don't get along, but I wanted to show the beauty in being a product of both.” Political identity Chicano/a political identity developed from a reverence of pachuco resistance to assimilation in the 1940s and 1950s. Luis Valdez records that "pachuco determination and pride grew through the 1950s and gave impetus to the Chicano Movement of the 1960s ... By then the political consciousness stirred by the 1943 Zoot Suit Riots had developed into a movement that would soon issue the Chicano Manifesto–a detailed platform of political activism." By the 1960s, according to Catherine S. Ramírez, the pachuco figure "emerged as an icon of resistance in Chicano cultural production." However, the pachuca figure was not regarded with the same status as the pachuco, which Ramírez credits with the pachuca's embodiment of "dissident femininity, female masculinity, and, in some instances, lesbian sexuality." By the 1960s, Chicano/a identity was consolidating around several key political positions: rejecting assimilation into Anglo-American society, resisting systemic racism and the American nation-state, and affirming the need to create alliances with other oppressed ethnic groups and Third World peoples. Political liberation was a founding principle of Chicano nationalism, which called for the creation of a Chicano/a subject whose political identity was separate from the U.S. nation-state, which Chicanos recognized had impoverished, oppressed, and destroyed their people and communities. 
Alberto Varon writes that, while Chicano nationalism "created enduring social improvement for the lives of Mexican Americans and others" through political action, this brand of Chicano nationalism privileged the machismo subject in its calls for political resistance, which has since been critiqued by Chicana feminism. Several Chicana/o writers state that Chicano hypermasculinity inhibited and stifled the Chicano Movement. Chicana writer Cherríe Moraga identifies homophobia and sexism as obstacles to the Movement which deprived Chicanas of critical knowledge about a "grassroots feminist movement where women of color, including lesbians of color, [had] been actively involved in reproductive rights, especially sterilization abuse, battered women's shelters, rape crisis centers, welfare advocacy, Third World women's conferences, cultural events, health and self-help clinics and more." Sonia Saldívar-Hull writes that crucial texts such as Essays on La Mujer (1977), Mexican Women in the United States (1980), and This Bridge Called My Back (1981) have been relatively ignored, even in Chicana/o Studies, while "a failure to address women's issues and women's historical participation in the political arena continues." Saldívar-Hull notes that when Chicanas have challenged sexism, their identities have been invalidated. Chicano political activist groups such as the Brown Berets (1967-1972; 1992–Present), founded by David Sánchez in East L.A. as the Young Chicanos for Community Action, gained support for their political objectives of protesting educational inequalities and demanding an end to police brutality. They collaborated with the Black Panthers and Young Lords, which were founded in 1966 and 1968 respectively. Membership in the Brown Berets was estimated to have reached five thousand in over 80 chapters (mostly centered in California and Texas). The Brown Berets helped organize the Chicano Blowouts of 1968 and the national Chicano Moratorium, which protested the high rate of Chicano casualties in the Vietnam War. Continued police harassment, infiltration by federal agents provacateur via COINTELPRO, and internal disputes led to the decline and disbandment of the Berets in 1972. Sánchez, then a professor at East Los Angeles College, revived the Brown Berets in 1992 after being prompted by the high number of Chicano homicides in Los Angeles County, seeking to supplant the structure of the gang as family with the Brown Berets. At certain points in the 1970s, Chicano was the preferred term for reference to Mexican Americans, particularly in scholarly literature. Chicano/a fell out of favor as a way of referring to the entire population in the 1980s following the decline of the Chicano Movement. This indicated a political shift among Mexican Americans, many of whom shifted to identifying as Hispanic in an era of American conservatism. Hispanic itself emerged from an assimilationist politics rooted in anti-Black sentiments. The term was forged out of a collaboration between Mexican American political elites in the emerging Hispanic Caucus and the U.S. government, who wanted to use the term to encourage a shift away from Chicana/o identity in order to appear more 'mainstream' or respectable to white Americans. The Hispanic Caucus also sought to separate themselves from the radical politics of Chicanismo and what they perceived as the 'militancy' of Chicana/o and Black political consciousness. 
Reies Tijerina, who was a vocal claimant to the rights of Latin Americans and Mexican Americans and a major figure of the early Chicano Movement, wrote: "The Anglo press degradized the word 'Chicano'. They use it to divide us. We use it to unify ourselves with our people and with Latin America." Cultural identity Since the Chicano Movement, Chicano has been reclaimed by Mexican-Americans to denote an identity that is in opposition to Anglo-American culture while being neither fully "American" or "Mexican." Chicano culture embodies the "in-between" nature of cultural hybridity. Central aspects of Chicano culture include lowriding, hip hop, rock, graffiti art, theater, muralism, visual art, literature, poetry, and more. Notable subscultures include the chola/o, pachuca, pachuco, and pinta/o subcultures. Chicano culture has had international influence in the form of lowrider car clubs in Brazil and England, music and youth culture in Japan, Māori youth enhancing lowrider bicycles and taking on cholo style, and intellectuals in France "embracing the deterritorializing qualities of Chicano subjectivity." Former president of the Modern Language Association Mary Louise Pratt stated that Chicano cultural practices constitute a space "of ongoing critical and inventive interaction with the dominant culture, as contact zones across which significations move in many directions." As early as the 1930s, the precursors to Chicano cultural identity were developing in Los Angeles, California and the Southwestern United States. Former zoot suiter Salvador "El Chava" reflects on how racism and poverty forged a hostile social environment for Chicanos/as which led to the development of gangs: "we had to protect ourselves." Barrios and colonias (rural barrios) emerged throughout southern California and elsewhere in neglected districts of cities and outlying areas with little infrastructure. Alienation from public institutions made some Chicano youth susceptible to gang channels, who became drawn to their rigid hierarchical structure and assigned social roles in a world of government-sanctioned disorder. Pachuco/a culture developed in the borderland areas of California and Texas as Pachuquismo, which would eventually evolve into Chicanismo. Chicano zoot suiters on the west coast were influenced by Black zoot suiters in the jazz and swing music scene on the East Coast. Chicano zoot suiters developed a unique cultural identity, as noted by Charles "Chaz" Bojórquez, "with their hair done in big pompadours, and 'draped' in tailor-made suits, they were swinging to their own styles. They spoke Cálo, their own language, a cool jive of half-English, half-Spanish rhythms. [...] Out of the zootsuiter experience came lowrider cars and culture, clothes, music, tag names, and, again, its own graffiti language." As described by artist Carlos Jackson, "Pachuco culture remains a prominent theme in Chicano art because the contemporary urban cholo culture" is seen as its heir. Many aspects of Chicano culture, such as lowriding cars and bicycles, have been stigmatized and policed by Anglo Americans who perceive Chicanos as "juvenile delinquents or gang members" for their embrace of nonwhite style and cultures, much as they did Pachucos. These negative societal perceptions of Chicanos were amplified by media outlets such as the Los Angeles Times. 
Luis Alvarez remarks on how negative portrayals in the media served as a tool to increase policing of Black and Brown male bodies in particular: "Popular discourse characterizing nonwhite youth as animal-like, hypersexual, and criminal marked their bodies as 'other' and, when coming from city officials and the press, served to help construct for the public a social meaning of African Americans and Mexican American youth. In these ways, the physical and discursive bodies of nonwhite youth were the sites upon which their dignity was denied." Chicano rave culture in southern California provided a space for Chicanos to partially escape criminalization in the 1990s. Artist and archivist Guadalupe Rosales states that "a lot of teenagers were being criminalised or profiled as criminals or gangsters, so the party scene gave access for people to escape that." Numerous party crews, such as Aztek Nation, organized events, and parties would frequently take place in neighborhood backyards, particularly in East and South Los Angeles, the surrounding valleys, and Orange County. By 1995, it was estimated that over 500 party crews were in existence. They laid the foundations for "an influential but oft-overlooked Latin dance subculture that offered community for Chicano ravers, queer folk, and other marginalized youth." Ravers used map-point techniques to derail police raids. Rosales states that a shift occurred around the late 1990s and that increasing violence affected the Chicano party scene. Indigenous identity Chicano/a identity functions as a way to reclaim one's Indigenous American, and often Indigenous Mexican, ancestry—to form an identity distinct from European identity, despite some Chicanos/as being of partial European descent—as a way to resist and subvert colonial domination. Rather than a "subculture" of European American culture, Alicia Gaspar de Alba refers to Chicanismo as an "alter-Native culture, an Other American culture Indigenous to the land base now known as the West and Southwest of the United States." While influenced by settler-imposed systems and structures, Gaspar de Alba refers to Chicana/o culture as "not immigrant but native, not foreign but colonized, not alien but different from the overarching hegemony of white America." The Plan Espiritual de Aztlán (1969) drew from Frantz Fanon's The Wretched of the Earth (1961). In Wretched, Fanon stated: "the past existence of an Aztec civilization does not change anything very much in the diet of the Mexican peasant today," elaborating that "this passionate search for a national culture which existed before the colonial era finds its legitimate reason in the anxiety shared by native intellectuals to shrink away from that of Western culture in which they all risk being swamped ... the native intellectuals, since they could not stand wonderstruck before the history of today's barbarity, decided to go back further and to delve deeper down; and, let us make no mistake, it was with the greatest delight that they discovered that there was nothing to be ashamed of in the past, but rather dignity, glory, and solemnity." The Chicano Movement adopted this perspective through the notion of Aztlán—a mythic Aztec homeland which Chicano/as used as a way to connect themselves to a precolonial past, before the time of the "'gringo' invasion of our lands."
Chicano/a scholars describe how this reclamation functioned as a way for Chicano/as to reclaim a diverse or imprecise Indigenous past; while recognizing how Aztlán promoted divisive forms of Chicano nationalism that "did little to shake the walls and bring down the structures of power as its rhetoric so firmly proclaimed." As stated by Chicano historian Juan Gómez-Quiñones, the Plan Espiritual de Aztlán was "stripped of what radical element it possessed by stressing its alleged romantic idealism, reducing the concept of Aztlán to a psychological ploy ... all of which became possible because of the Plan's incomplete analysis which, in turn, allowed it ... to degenerate into reformism." While acknowledging its romanticized and exclusionary foundations, Chicano/a scholars like Rafael Pérez-Torres state that Aztlán opened a subjectivity which stressed a connection to Indigenous peoples and cultures at a critical historical moment in which Mexican-Americans and Mexicans were "under pressure to assimilate particular standards—of beauty, of identity, of aspiration. In a Mexican context, the pressure was to urbanize and Europeanize .... 'Mexican-Americans' were expected to accept anti-indigenous discourses as their own." As Pérez-Torres concludes, Aztlán allowed "for another way of aligning one's interests and concerns with community and with history ... though hazy as to the precise means in which agency would emerge, Aztlán valorized a Chicanismo that rewove into the present previously devalued lines of descent." Romanticized notions of Aztlán have declined among some Chicano/as, who argue for a need to reconstruct the place of Indigeneity in relation to Chicano/a identity. Danza Azteca grew popular in the U.S. with the rise of the Chicano Movement, which inspired some "Latinos to embrace their ethnic heritage and question the Eurocentric norms forced upon them." The appropriation of pre-contact Aztec cultural elements has been critiqued by some Chicana/os who argue for a need to affirm the diversity of Indigenous ancestry among Chicana/os. Patrisia Gonzales portrays Chicano people as descendants of the Indigenous peoples of Mexico who have been displaced because of colonial violence, positioning them among "detribalized Indigenous peoples and communities." Roberto Cintli Rodríguez describes Chicano/as as "de-Indigenized," which he remarks occurred "in part due to religious indoctrination and a violent uprooting from the land," detaching them from maíz-based cultures throughout the greater Mesoamerican region. Rodríguez examines how and why "peoples who are clearly red or brown and undeniably Indigenous to this continent have allowed ourselves, historically, to be framed by bureaucrats and the courts, by politicians, scholars, and the media as alien, illegal, and less than human." Gloria E. Anzaldúa has addressed detribalization, stating "In the case of Chicanos, being 'Mexican' is not a tribe. So in a sense Chicanos and Mexicans are 'detribalized'. We don't have tribal affiliations but neither do we have to carry ID cards establishing tribal affiliation." Anzaldúa also recognizes that "Chicanos, people of color, and 'whites'," have often chosen "to ignore the struggles of Native people even when it's right in our caras (faces)," expressing disdain for this "willful ignorance." 
She concludes that "though both 'detribalized urban mixed bloods' and Chicanas/os are recovering and reclaiming, this society is killing off urban mixed bloods through cultural genocide, by not allowing them equal opportunities for better jobs, schooling, and health care." Inés Hernández-Ávila emphasizes how Chicano/as should recognize and reconnect with their roots "respectfully and humbly" while also validating "those peoples who still maintain their identity as original peoples of this continent" in order to create radical change capable of "transforming our world, our universe, and our lives." Political aspects Anti-imperialism and international solidarity During World War II, Chicano youth were targeted by white servicemen, who despised their "cool, measured indifference to the war, as well as an increasingly defiant posture toward whites in general." Historian Robin Kelley states that this "annoyed white servicemen to no end." During the Zoot Suit Riots (1943), white rage erupted in Los Angeles, which "became the site of racist attacks on Black and Chicano youth, during which white soldiers engaged in what amounted to a ritualized stripping of the zoot." Zoot suits were a symbol of collective resistance among Chicano and Black youth against city segregation and fighting in the war. Many Chicano and Black zoot-suiters engaged in draft evasion because they felt it was hypocritical for them to be expected to "fight for democracy" abroad yet face racism and oppression daily in the U.S. This galvanized Chicano youth to focus on anti-war activism, "especially influenced by the Third World movements of liberation in Asia, Africa, and Latin America." Historian Mario T. García reflects that "these anti-colonial and anti-Western movements for national liberation and self-awareness touched a historical nerve among Chicanos/as as they began to learn that they shared some similarities with these Third World struggles." Chicano poet Alurista argued that "Chicanas/os cannot be truly free until they recognize that the struggle in the United States is intricately bound with the anti-imperialist struggle in other countries." The Cuban Revolution (1953–59) led by Fidel Castro and Che Guevara was particularly influential to Chicanos; García notes that Chicanas/os viewed the revolution as "a nationalist revolt against 'Yankee imperialism' and neo-colonialism." In the 1960s, the Chicano Movement brought "attention and commitment to local struggles with an analysis and understanding of international struggles." Chicano youth organized with Black, Latin American, and Filipino activists to form the Third World Liberation Front (TWLF), which fought for the creation of a Third World college. During the Third World Liberation Front strikes of 1968, Chicano artists created posters to express solidarity. Chicano poster artist Rupert García referred to the place of artists in the movement: "I was critical of the police, of capitalist exploitation. I did posters of Che, of Zapata, of other Third World leaders. As artists, we climbed down from the ivory tower." Learning from Cuban poster makers of the post-revolutionary period, Chicano artists "incorporated international struggles for freedom and self-determination, such as those of Angola, Chile, and South Africa," while also promoting the struggles of Indigenous people and other civil rights movements through Black-brown unity.
Chicanas organized with women of color activists to create the Third World Women's Alliance (1968–1980), representing "visions of liberation in third world solidarity that inspired political projects among racially and economically marginalized communities" against U.S. capitalism and imperialism. The Chicano Moratorium (1969–71) against the Vietnam War was one of the largest demonstrations of Mexican-Americans in history, drawing over 30,000 supporters in East L.A. Draft evasion was a form of resistance for Chicano anti-war activists such as Rosalio Muñoz, Ernesto Vigil, and Salomon Baldengro. They faced a felony charge—a minimum of five years in prison, a $10,000 fine, or both. In response, Muñoz wrote "I declare my independence of the Selective Service System. I accuse the government of the United States of America of genocide against the Mexican people. Specifically, I accuse the draft, the entire social, political, and economic system of the United States of America, of creating a funnel which shoots Mexican youth into Vietnam to be killed and to kill innocent men, women, and children...." Rodolfo "Corky" Gonzales expressed a similar stance: "My feelings and emotions are aroused by the complete disregard of our present society for the rights, dignity, and lives of not only people of other nations but of our own unfortunate young men who die for an abstract cause in a war that cannot be honestly justified by any of our present leaders." Anthologies such as This Bridge Called My Back: Writings by Radical Women of Color (1981) were produced in the late 1970s and early 80s by lesbian of color writers Cherríe Moraga, Pat Parker, Toni Cade Bambara, Chrystos, Audre Lorde, Gloria E. Anzaldúa, Cheryl Clarke, Jewelle Gomez, Kitty Tsui, and Hattie Gossett, who developed a poetics of liberation. Kitchen Table: Women of Color Press and Third Woman Press, the latter founded in 1979 by Chicana feminist Norma Alarcón, provided sites for the production of women of color and Chicana literatures and critical essays. While first world feminists focused "on the liberal agenda of political rights," Third World feminists "linked their agenda for women's rights with economic and cultural rights" and unified "under the banner of Third World solidarity." Maylei Blackwell observes that this internationalist critique of capitalism and imperialism forged by women of color has yet to be fully historicized and is "usually dropped out of the false historical narrative." In the 1980s and 90s, Central American activists influenced Chicano leaders. The Mexican American Legislative Caucus (MALC) supported the Esquipulas Peace Agreement in 1987, standing in opposition to Contra aid. Al Luna criticized Reagan and American involvement while defending Nicaragua's Sandinista-led government: "President Reagan cannot credibly make public speeches for peace in Central America while at the same time advocating for a three-fold increase in funding to the Contras." The Southwest Voter Research Initiative (SVRI), launched by Chicano leader Willie Velásquez, intended to educate Chicano youth about Central and Latin American political issues. In 1988, "there was no significant urban center in the Southwest where Chicano leaders and activists had not become involved in lobbying or organizing to change U.S. policy in Nicaragua." In the early 90s, Cherríe Moraga urged Chicano activists to recognize that "the Anglo invasion of Latin America [had] extended well beyond the Mexican/American border" while Gloria E.
Anzaldúa positioned Central America as the primary target of a U.S. interventionism that had murdered and displaced thousands. However, Chicano solidarity narratives of Central Americans in the 1990s tended to center themselves, stereotype Central Americans, and filter their struggles "through Chicana/o struggles, histories, and imaginaries." Chicano activists organized against the Gulf War (1990–91). Raul Ruiz of the Chicano Mexican Committee against the Gulf War stated that U.S. intervention was "to support U.S. oil interests in the region." Ruiz expressed, "we were the only Chicano group against the war. We did a lot of protesting in L.A. even though it was difficult because of the strong support for the war and the anti-Arab reaction that followed ... we experienced racist attacks [but] we held our ground." The end of the Gulf War, along with the Rodney King Riots, was crucial in inspiring a new wave of Chicano political activism. In 1994, one of the largest demonstrations of Mexican Americans in the history of the United States occurred when 70,000 people, largely Chicano/a and Latino/a, marched in Los Angeles and other cities to protest Proposition 187, which aimed to cut educational and welfare benefits for undocumented immigrants. In 2004, Mujeres against Militarism and the Raza Unida Coalition sponsored a Day of the Dead vigil against militarism within the Latino community, addressing the War in Afghanistan (2001–) and the Iraq War (2003–11). They held photos of the dead and chanted "no blood for oil." The procession ended with a five-hour vigil at Tia Chucha's Centro Cultural. They condemned "the Junior Reserve Officers Training Corps (JROTC) and other military recruitment programs that concentrate heavily in Latino and African American communities, noting that JROTC is rarely found in upper-income Anglo communities." Rubén Funkahuatl Guevara organized a benefit concert for Latin@s Against the War in Iraq and Mexamérica por la Paz at Self-Help Graphics in opposition to the Iraq War. Although the events were well-attended, Guevara stated that "the Feds know how to manipulate fear to reach their ends: world military dominance and maintaining a foothold in an oil-rich region were their real goals." Labor organizing against capitalist exploitation Chicano/a and Mexican labor organizers have played an active role in notable labor strikes since the early 20th century, including the Oxnard strike of 1903, Pacific Electric Railway strike of 1903, 1919 Streetcar Strike of Los Angeles, Cantaloupe strike of 1928, California agricultural strikes (1931–41), and the Ventura County agricultural strike of 1941; endured mass deportations as a form of strikebreaking in the Bisbee Deportation of 1917 and Mexican Repatriation (1929–36); and experienced tensions with one another during the Bracero program (1942–64). Although organizing laborers were harassed, sabotaged, and repressed, sometimes through war-like tactics from capitalist owners who engaged in coercive labor relations and collaborated with and received support from local police and community organizations, Chicano/a and Mexican workers, particularly in agriculture, have been engaged in widespread unionization activities since the 1930s. Prior to unionization, agricultural workers, many of whom were undocumented aliens, worked in dismal conditions. Historian F.
Arturo Rosales recorded a Federal Project Writer of the period, who stated: "It is sad, yet true, commentary that to the average landowner and grower in California the Mexican was to be placed in much the same category with ranch cattle, with this exception–the cattle were for the most part provided with comparatively better food and water and immeasurably better living accommodations." Growers used cheap Mexican labor to reap bigger profits and, until the 1930s, perceived Mexicans as docile and compliant with their subjugated status because they "did not organize troublesome labor unions, and it was held that he was not educated to the level of unionism." As one grower described, "We want the Mexican because we can treat them as we cannot treat any other living man ... We can control them by keeping them at night behind bolted gates, within a stockade eight feet high, surrounded by barbed wire ... We can make them work under armed guards in the fields." Unionization efforts were initiated by the Confederación de Uniones Obreras (Federation of Labor Unions) in Los Angeles, with twenty-one chapters quickly extending throughout southern California, and La Unión de Trabajadores del Valle Imperial (Imperial Valley Workers' Union). The latter organized the Cantaloupe strike of 1928, in which workers demanded better working conditions and higher wages, but "the growers refused to budge and, as became a pattern, local authorities sided with the farmers and through harassment broke the strike." Communist-led organizations such as the Cannery and Agricultural Workers' Industrial Union (CAWIU) supported Mexican workers, renting spaces for cotton pickers during the cotton strikes of 1933 after they were thrown out of company housing by growers. Capitalist owners used "red-baiting" techniques to discredit the strikes through associating them with communists. Chicana and Mexican working women showed the greatest tendency to organize, particularly in the Los Angeles garment industry with the International Ladies' Garment Workers' Union, led by anarchist Rose Pesotta. During World War II, the government-funded Bracero program (1942–64) hindered unionization efforts. In response to the California agricultural strikes and the 1941 Ventura County strike of Chicano/a and Mexican, as well as Filipino, lemon pickers/packers, growers organized the Ventura County Citrus Growers Committee (VCCGC) and launched a lobbying campaign to pressure the U.S. government to pass laws to prohibit labor organizing. VCCGC joined with other grower associations, forming a powerful lobbying bloc in Congress, and worked to legislate for | members were convinced by the police of cholo/a criminality, which led to criminalization and surveillance "reminiscent of the criminalization of Chicana and Chicano youth during the Zoot-Suit era in the 1940s." Sociologist José S. Plascencia-Castillo refers to the barrio as a panopticon, a space which leads to intense self-regulation, as Cholo/a youth are both scrutinized by law enforcement to "stay in their side of town" and by the community who in some cases "call the police to have the youngsters removed from the premises." The intense governance of Chicana/o youth, especially those who adopt the chola/o identity, has deep implications on youth experience, affecting their physical and mental health as well as their outlook on the future. 
Some youth feel they "can either comply with the demands of authority figures, and become obedient and compliant, and suffer the accompanying loss of identity and self-esteem, or, adopt a resistant stance and contest social invisibility to command respect in the public sphere." Gender and sexuality Chicana women and girls often confront objectification in Anglo society, being perceived as "exotic," "lascivious," and "hot" at a very young age while also facing denigration as "barefoot," "pregnant," "dark," and "low-class." These perceptions in society engender numerous negative sociological and psychological effects, such as excessive dieting and eating disorders. Social media may enhance these stereotypes of Chicana women and girls. Numerous studies have found that Chicanas experience elevated levels of stress as a result of sexual expectations by their parents and families. Although many Chicana youth desire open conversation of these gendered and sexual expectations, as well as mental health, these issues are often not discussed openly in Chicano families, which perpetuates unsafe and destructive practices. While young Chicana women are objectified, middle-aged Chicanas discuss feelings of being invisible, saying they feel trapped in balancing family obligations to their parents and children while attempting to create a space for their own sexual desires. The expectation that Chicana women should be "protected" by Chicano men may also constrict the agency and mobility of Chicana women. Chicano men develop their identity within a context of marginalization in Anglo society. Some writers argue that "Mexican men and their Chicano brothers suffer from an inferiority complex due to the conquest and genocide inflicted upon their Indigenous ancestors," which leaves Chicano men feeling trapped between identifying with the so-called "superior" European and the so-called "inferior" Indigenous sense of self. This conflict is said to manifest itself in the form of hypermasculinity or machismo, in which a "quest for power and control over others in order to feel better" about oneself is undertaken. This may result in abusive behavior, the development of an impenetrable "cold" persona, alcohol abuse, and other destructive and self-isolating behaviors. The lack of discussion of sexuality between Chicano men and their fathers or their mothers means that Chicano men tend to learn about sex from their peers as well as older male family members who perpetuate the idea that as men they have "a right to engage in sexual activity without commitment." The looming threat of being labeled a joto (gay) for not engaging in sexual activity also conditions many Chicano men to "use" women for their own sexual desires. Heteronormative gender roles are often enforced in Chicano families. Any deviation from gender and sexual conformity is perceived as a weakening or attack of la familia. However, certain Chicano men who retain a masculine gender identity are afforded some mobility to secretly engage in homosexual behaviors because of their gender performance, as long as it remains on the fringes. Effeminacy in Chicano men, Chicana lesbianism, and any other deviation which challenges patriarchal gender and sexuality is highly policed and understood as an attack on the family by Chicano men. Chicana women in the normative Chicano family are relegated to a secondary and subordinate status. 
Cherríe Moraga argues that this issue of patriarchal ideology in Chicano and Latino communities runs deep, as the great majority of Chicano and Latino men believe in and uphold male supremacy. Moraga also points to how this ideology is upheld in Chicano families by mothers in their relationship to their children: "the daughter must constantly earn the mother's love, prove her fidelity to her. The son–he gets her love for free." Queer Chicanas/os may seek refuge in their families, if possible, because it is difficult for them to find spaces where they feel safe in the dominant and hostile Anglo culture which surrounds them while also feeling excluded because of the hypermasculinity, and subsequent homophobia, that frequently exists in Chicano familial and communal spaces. Gabriel S. Estrada describes how "the overarching structures of capitalist white (hetero)sexism," including higher levels of criminalization directed toward Chicanos, have proliferated "further homophobia" among Chicano boys and men who may adopt "hypermasculine personas that can include sexual violence directed at others." Estrada notes that this not only constricts "the formation of a balanced Indigenous sexuality for anyone, but especially ... for those who do identify" as part of the queer community; he calls on them to reject the "Judeo-Christian mandates against homosexuality that are not native to their own ways," recognizing that many Indigenous societies in Mexico and elsewhere accepted homosexuality openly prior to the arrival of European colonizers. Mental health Chicana/os may seek out both Western biomedical healthcare and Indigenous health practices when dealing with trauma or illness. The effects of colonization have been shown to produce psychological distress among Indigenous communities. Intergenerational trauma, racism, and institutionalized systems of oppression have been shown to adversely impact the mental health of Chicana/os and Latina/os. Mexican Americans are three times more likely than European Americans to live in poverty. Chicano/a adolescents experience high rates of depression and anxiety. Chicana adolescents have higher rates of depression and suicidal ideation than their European-American and African-American peers. Chicano adolescents experience high rates of homicide and suicide. Chicana/os ages ten to seventeen are at a greater risk for mood and anxiety disorders than their European-American and African-American peers. Scholars have determined that the reasons for this are unclear due to the scarcity of studies on Chicana/o youth, but intergenerational trauma, acculturative stress, and family factors are believed to contribute. Among Mexican immigrants who have lived in the United States for less than thirteen years, lower rates of mental health disorders were found in comparison to Mexican-Americans and Chicanos born in the United States. Scholar Yvette G. Flores concludes that these studies demonstrate that "factors associated with living in the United States are related to an increased risk of mental disorders." Risk factors for negative mental health include historical and contemporary trauma stemming from colonization, marginalization, discrimination, and devaluation. The disconnection of Chicanos from their Indigeneity has been cited as a cause of trauma and negative mental health: Loss of language, cultural rituals, and spiritual practices creates shame and despair.
The loss of culture and language often goes unmourned, because it is silenced and denied by those who occupy, conquer, or dominate. Such losses and their psychological and spiritual impact are passed down across generations, resulting in depression, disconnection, and spiritual distress in subsequent generations, which are manifestations of historical or intergenerational trauma. Psychological distress may emerge from Chicanos being "othered" in society since childhood and is linked to psychiatric disorders and symptoms which are culturally bound – susto (fright), nervios (nerves), mal de ojo (evil eye), and ataque de nervios (an attack of nerves resembling a panic attack). Dr. Manuel X. Zamarripa discusses how mental health and spirituality are often seen as disconnected subjects in Western perspectives. Zamarripa states, "in our community, spirituality is key for many of us in our overall wellbeing and in restoring and giving balance to our lives." For Chicana/os, Zamarripa recognizes that identity, community, and spirituality are three core aspects which are essential to maintaining good mental health. Spirituality Chicano/a spirituality has been described as a process of engaging in a journey to unite one's consciousness for the purposes of cultural unity and social justice. It brings together many elements and is therefore hybrid in nature. Scholar Regina M. Marchi states that Chicano/a spirituality "emphasizes elements of struggle, process, and politics, with the goal of creating a unity of consciousness to aid social development and political action." Lara Medina and Martha R. Gonzales explain that "reclaiming and reconstructing our spirituality based on non-Western epistemologies is central to our process of decolonization, particularly in these most troubling times of incessant Eurocentric, heteronormative patriarchy, misogyny, racial injustice, global capitalist greed, and disastrous global climate change." As a result, some scholars state that Chicana/o spirituality must involve a study of Indigenous Ways of Knowing (IWOK). The Circulo de Hombres group in San Diego, California, spiritually heals Chicano, Latino, and Indigenous men "by exposing them to Indigenous-based frameworks, men of this cultural group heal and rehumanize themselves through Maya-Nahua Indigenous-based concepts and teachings," helping them process intergenerational trauma and dehumanization that has resulted from colonization. A study on the group reported that reconnecting with Indigenous worldviews was overwhelmingly successful in helping Chicano, Latino, and Indigenous men heal. As stated by Jesus Mendoza, "our bodies remember our indigenous roots and demand that we open our mind, hearts, and souls to our reality." Chicano/a spirituality is a way for Chicana/os to listen, reclaim, and survive while disrupting coloniality. While historically Catholicism was the primary way for Chicana/os to express their spirituality, this is changing rapidly. According to a Pew Research Center report in 2015, "the primary role of Catholicism as a conduit to spirituality has declined and some Chicana/os have changed their affiliation to other Christian religions and many more have stopped attending church altogether." Increasingly, Chicana/os are considering themselves spiritual rather than religious or part of an organized religion.
A 2020 study on spirituality and Chicano men found that many Chicanos experienced the benefits of spirituality through connecting with Indigenous spiritual beliefs and worldviews rather than with organized Christian or Catholic religion. Dr. Lara Medina defines spirituality as (1) knowledge of oneself, one's gifts and one's challenges; (2) co-creation or a relationship with communities (others); and (3) a relationship with sacred sources of life and death, 'the Great Mystery' or Creator. Jesus Mendoza writes that, for Chicana/os, "spirituality is our connection to the earth, our pre-Hispanic history, our ancestors, the mixture of pre-Hispanic religion with Christianity ... a return to a non-Western worldview that understands all life as sacred." In her writing on Gloria Anzaldúa's idea of spiritual activism, AnaLouise Keating states that spirituality is distinct from organized religion and New Age thinking. Leela Fernandes defines spirituality as follows: When I speak of spirituality, at the most basic level I am referring to an understanding of the self as encompassing body and mind, as well as spirit. I am also referring to a transcendent sense of interconnection that moves beyond the knowable, visible material world. This sense of interconnection has been described variously as divinity, the sacred, spirit, or simply the universe. My understanding is also grounded in a form of lived spirituality, which is directly accessible to all and which does not need to be mediated by religious experts, institutions or theological texts; this is what is often referred to as the mystical side of spirituality... Spirituality can be as much about practices of compassion, love, ethics, and truth defined in nonreligious terms as it can be related to the mystical reinterpretations of existing religious traditions. David Carrasco states that Mesoamerican spiritual or religious beliefs have historically evolved in response to the conditions of the world around them: "These ritual and mythic traditions were not mere repetitions of ancient ways. New rituals and mythic stories were produced to respond to ecological, social, and economic changes and crises." This was represented through the art of the Olmecs, Maya, and Mexica. European colonizers sought to destroy Mesoamerican worldviews regarding spirituality and replace them with a Christian model. The colonizers used syncretism in art and culture, exemplified through practices such as the idea presented in the Testerian Codices that "Jesus ate tortillas with his disciples at the last supper" or the creation of the Virgen de Guadalupe (mirroring the Christian Mary), in order to force Christianity into Mesoamerican cosmology. Chicana/os can create new spiritual traditions by recognizing this history or "by observing the past and creating a new reality." Gloria Anzaldúa states that this can be achieved through nepantla spirituality, a space where, as stated by Jesus Mendoza, "all religious knowledge can coexist and create a new spirituality ... where no one is above the other ... a place where all is useful and none is rejected." Anzaldúa and other scholars acknowledge that this is a difficult process that involves navigating many internal contradictions in order to find a path towards spiritual liberation. Cherríe Moraga calls for a deeper self-exploration of who Chicana/os are in order to reach "a place of deeper inquiry into ourselves as a people ...
possibly, we must turn our eyes away from racist America and take stock at the damages done to us. Possibly, the greatest risks yet to be taken are entre nosotros, where we write, paint, dance, and draw the wound for one another to build a stronger pueblo. The women artist seemed disposed to do this, their work often mediating the delicate area between cultural affirmation and criticism." Laura E. Pérez states in her study of Chicana art that "the artwork itself [is] altar-like, a site where the disembodied - divine, emotional, or social - [is] acknowledged, invoked, meditated upon, and released as a shared offering." Cultural aspects The term Chicanismo describes the cultural, cinematic, literary, musical, and artistic movements that emerged with the Chicano Movement. While the Chicano Movement tended to focus on and prioritize the masculine subject, the diversity of Chicano cultural production is vast. As noted by artist Guillermo Gómez-Peña, "the actual diversity and complexity" of the Chicana/o community, which includes influences from Central American, Caribbean, Asian, and African Americans who have moved into Chicana/o communities as well as queer people of color, has been consistently overlooked. Many Chicanx artists therefore continue to challenge and question "conventional, static notions of Chicanismo," while others conform to more conventional cultural traditions. With mass media, Chicana/o culture has become popularized internationally. Lowrider car clubs have emerged, most notably in São Paulo, Brazil; Māori youth have enhanced lowrider bicycles and taken on cholo style; and elements of Chicano culture, including music, lowriders, and the arts, have been adopted in Japan. Chicano culture took hold in Japan in the 1980s and continues to grow with contributions from Shin Miyata, Junichi Shimodaira, Miki Style, Night Tha Funksta, and MoNa (Sad Girl). Miyata owns a record label, Gold Barrio Records, that re-releases Chicano music. Chicana/o fashion and other cultural aspects have also been adopted in Japan. There has been debate over whether this should be termed cultural appropriation, with some arguing that it is appreciation rather than appropriation. Film Chicana/o film is rooted in economic, social, and political oppression and has therefore been marginalized since its inception. Scholar Charles Ramírez Berg has suggested that Chicana/o cinema has progressed through three fundamental stages since its establishment in the 1960s. The first wave occurred from 1969 to 1976 and was characterized by the creation of radical documentaries which chronicled "the cinematic expression of a cultural nationalist movement, it was politically contestational and formally oppositional." Some films of this era include El Teatro Campesino's Yo Soy Joaquín (1969) and Luis Valdez's El Corrido (1976). These films focused on documenting the systematic oppression of Chicanas/os in the United States. The second wave of Chicana/o film, according to Ramírez Berg, developed out of portraying anger against oppression faced in society, highlighting immigration issues, and re-centering the Chicana/o experience, yet channeling this in more accessible forms which were not as outright separatist as the first wave of films. Docudramas like Esperanza Vasquez's Agueda Martínez (1977), Jesús Salvador Treviño's Raíces de Sangre (1977), and Robert M. Young's ¡Alambrista! (1977) served as transitional works which would inspire full-length narrative films.
Early narrative films of the second wave include Valdez's Zoot Suit (1981), Young's The Ballad of Gregorio Cortez (1982), Gregory Nava's My Family/Mi familia (1995) and Selena (1997), and Josefina López's Real Women Have Curves, originally a play which premiered in 1990 and was later released as a film in 2002. The second wave of Chicana/o film is still ongoing and overlaps with the third wave, the latter of which gained noticeable momentum in the 1990s and does not emphasize oppression, exploitation, or resistance as central themes. According to Ramírez Berg, third wave films "do not accentuate Chicano oppression or resistance; ethnicity in these films exists as one fact of several that shape characters' lives and stamps their personalities." Literature Chicana/o literature tends to incorporate themes of identity, discrimination, and culture, with an emphasis on validating Mexican American and Chicana/o culture in the United States. Chicana/o writers also focus on challenging the dominant colonial narrative, "not only to critique the uncritically accepted 'historical' past, but more importantly to reconfigure it in order to envision and prepare for a future in which native peoples can find their appropriate place in the world and forge their individual, hybrid sense of self." Notable Chicana/o writers include Norma Elia Cantú, Gary Soto, Sergio Troncoso, Rigoberto González, Raul Salinas, Daniel Olivas, Benjamin Alire Sáenz, Luis Alberto Urrea, Dagoberto Gilb, Alicia Gaspar de Alba, Luis J. Rodriguez and Pat Mora. Rodolfo "Corky" Gonzales's "Yo Soy Joaquín" is one of the first examples of explicitly Chicano poetry, while José Antonio Villarreal's Pocho (1959) is widely recognized as the first major Chicano novel. The novel Chicano, by Richard Vasquez, was the first novel about Mexican Americans to be released by a major publisher (Doubleday, 1970). It was widely read in high schools and universities during the 1970s and is now recognized as a breakthrough novel. Vasquez's social themes have been compared with those found in the work of Upton Sinclair and John Steinbeck. Chicana writers have tended to focus on themes of identity, questioning how identity is constructed, who constructs it, and for what purpose in a racist, classist, and patriarchal structure. Characters in books such as Victuum (1976) by Isabella Ríos, The House on Mango Street (1983) by Sandra Cisneros, Loving in the War Years: lo que nunca pasó por sus labios (1983) by Cherríe Moraga, The Last of the Menu Girls (1986) by Denise Chávez, Margins (1992) by Terri de la Peña, and Gulf Dreams (1996) by Emma Pérez have also been read regarding how they intersect with themes of gender and sexuality. Academic Catrióna Rueda Esquibel performs a queer reading of Chicana literature in her work With Her Machete in Her Hand: Reading Chicana Lesbians (2006), demonstrating how some of the intimate relationships between girls and women in these works contribute to a discourse on homoeroticism and nonnormative sexuality in Chicana/o literature. Chicano writers have tended to gravitate toward themes of cultural, racial, and political tensions in their work, while not explicitly focusing on issues of identity or gender and sexuality, in comparison to the work of Chicana writers.
Chicanos who were marked as overtly gay in early Chicana/o literature, from 1959 to 1972, tended to be removed from the Mexican-American barrio and were typically portrayed with negative attributes, as examined by Daniel Enrique Pérez, such as the character of "Joe Pete" in Pocho and the unnamed protagonist of John Rechy's City of Night (1963). However, other characters in the Chicano canon may also be read as queer, such as the unnamed protagonist of Tomás Rivera's ...y no se lo tragó la tierra (1971), and "Antonio Márez" in Rudolfo Anaya's Bless Me, Ultima (1972), since, according to Pérez, "these characters diverge from heteronormative paradigms and their identities are very much linked to the rejection of heteronormativity." As noted by scholar Juan Bruce-Novoa, Chicano novels allowed androgynous and complex characters "to emerge and facilitate a dialogue on nonnormative sexuality," and homosexuality was "far from being ignored during the 1960s and 1970s" in Chicano literature, although homophobia may have curtailed portrayals of openly gay characters during this era. Given this representation in early Chicano literature, Bruce-Novoa concludes, "we can say our community is less sexually repressive than we might expect." Music Lalo Guerrero has been lauded as the "father of Chicano music." Beginning in the 1930s, he wrote songs in the big band and swing genres that were popular at the time. He expanded his repertoire to include songs written in traditional genres of Mexican music, and during the farmworkers' rights campaign, wrote music in support of César Chávez and the United Farm Workers. Jeffrey Lee Pierce of The Gun Club often spoke about being half-Mexican and growing up with Chicano culture. Other Chicano/Mexican-American singers include Selena, who sang a mixture of Mexican, Tejano, and American popular music, and died in 1995 at the age of 23; Zack de la Rocha, social activist and lead vocalist of Rage Against the Machine; and Los Lonely Boys, a Texas-style country rock band who have not ignored their Mexican-American roots in their music. In recent years, a growing Tex-Mex polka band trend influenced by the music of Mexican immigrants has in turn influenced much new Chicano folk music, especially on large-market Spanish language radio stations and on television music video programs in the U.S. Some of these artists, like the band Quetzal, are known for the political content of their songs. Electronic Chicano electronic artists DJ Rolando, Santiago Salazar, DJ Tranzo, and Esteban Adame have released music through independent labels like Underground Resistance, Planet E, Krown Entertainment, and Rush Hour. In the 1990s, artists such as DJ Juanito (Johnny Loopz), Rudy "Rude Dog" Gonzalez, and Juan V. released numerous tracks through Los Angeles-based house labels Groove Daddy Records and Bust A Groove. DJ Rolando's "Knights of the Jaguar," released on the UR label in 1999, became the most well-known Chicano techno track after charting at #43 in the UK in 2000 and being named one of the "20 best US rave anthems of the '90s" by Mixmag, which wrote: "after it was released, it spread like wildfire all over the world. It's one of those rare tracks that feels like it can play for an eternity without anyone batting an eyelash." In 2013, it was voted the 26th best house track of all time by Mixmag. Salazar and Adame are also affiliated with UR and have collaborated with DJ Dex (Nomadico).
Salazar founded music labels Major People, Ican (as in Mex-Ican, with Esteban Adame) and Historia y Violencia (with Juan Mendez a.k.a. Silent Servant) and released his debut album Chicanismo in 2015 to positive reviews. Nomadico's label Yaxteq, founded in 2015, has released tracks by veteran Los Angeles techno producer Xavier De Enciso and Honduran producer Ritmos. Hip hop Hip hop culture, which is cited as having formed in the 1970s street culture of African American, West Indian (especially Jamaican), and Puerto Rican New York City Bronx youth and characterized by DJing, rap music, graffiti, and breakdancing, was adopted by many Chicano youth by the 1980s as its influence moved westward across the United States. Chicano artists were beginning to develop their own style of hip hop. Rappers such as Ice-T and Eazy-E shared their music and commercial insights with Chicano rappers in the late 1980s. Chicano rapper Kid Frost, who is often cited as "the godfather of Chicano rap," was highly influenced by Ice-T and was even described as his protégé. Chicano rap is a unique style of hip hop music which started with Kid Frost, who saw some mainstream exposure in the early 1990s. While Mellow Man Ace was the first mainstream rapper to use Spanglish, Frost's song "La Raza" paved the way for its use in American hip hop. Chicano rap tends to discuss themes of importance to young urban Chicanos. Some of today's Chicano artists include A.L.T., Lil Rob, Psycho Realm, Baby Bash, Serio, A Lighter Shade of Brown, Funky Aztecs, Sir Dyno, and Choosey. Chicano R&B artists include Paula DeAnda, Frankie J, and Victor Ivan Santos (early member of the Kumbia Kings and associated with Baby Bash). Jazz Although Latin jazz is most popularly associated with artists from the Caribbean (particularly Cuba) and Brazil, young Mexican Americans have played a role in its development over the years, going back to the 1930s and early 1940s, the era of the zoot suit, when young Mexican-American musicians in Los Angeles and San Jose began to experiment with a jazz-like fusion genre that has recently grown in popularity among Mexican Americans. Rock In the 1950s, 1960s and 1970s, a wave of Chicano pop music surfaced through innovative musicians Carlos Santana, Johnny Rodriguez, Ritchie Valens and Linda Ronstadt. Joan Baez, who is also of Mexican-American descent, included Hispanic themes in some of her protest folk songs. Chicano rock is rock music performed by Chicano groups or music with themes derived from Chicano culture. There are two undercurrents in Chicano rock. One is a devotion to the original rhythm and blues roots of rock and roll, including Ritchie Valens, Sunny and the Sunglows, and ? and the Mysterians. Groups inspired by this include Sir Douglas Quintet, Thee Midniters, Los Lobos, War, Tierra, and El Chicano, and, of course, the Chicano Blues Man himself, the late Randy Garibay. The second theme is the openness to Latin American sounds and influences. Trini Lopez, Santana, Malo, Azteca, Toro, Ozomatli and other Chicano Latin rock groups follow this approach. Chicano rock crossed paths with other Latin rock genres (rock en español) performed by Cubans and Puerto Ricans, such as Joe Bataan and Ralphi Pagan, and with South American movements (nueva canción). Rock band The Mars Volta combines elements of progressive rock with traditional Mexican folk music and Latin rhythms along with Cedric Bixler-Zavala's Spanglish lyrics. Chicano punk is a branch of Chicano rock. There were many bands that emerged from the California |
are a Spanish region and archipelago in the Atlantic Ocean, in Macaronesia. At their closest point to the African mainland, they are west of Morocco. They are the southernmost of the autonomous communities of Spain. The archipelago is economically and politically European and is part of the European Union. The eight main islands are (from largest to smallest in area) Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera, El Hierro and La Graciosa. The archipelago includes many smaller islands and islets, including Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste, and Roque del Este. It also includes a number of rocks, including those of Salmor, Fasnia, Bonanza, Garachico, and Anaga. In ancient times, the island chain was often referred to as "the Fortunate Isles". The Canary Islands are the southernmost region of Spain, and the largest and most populous archipelago of Macaronesia. Because of their location, the Canary Islands have historically been considered a link between the four continents of Africa, North America, South America, and Europe. In 2019, the Canary Islands had a population of 2,153,389, with a density of 287.39 inhabitants per km2, making it the eighth most populous autonomous community of Spain. The population is mostly concentrated in the two capital islands: around 43% on the island of Tenerife and 40% on the island of Gran Canaria. The Canary Islands, especially Tenerife, Gran Canaria, Fuerteventura, and Lanzarote, are a major tourist destination, with over 12 million visitors per year. This is due to their beaches, subtropical climate, and important natural attractions, especially Maspalomas in Gran Canaria, Teide National Park, and Mount Teide (a World Heritage Site) in Tenerife. Mount Teide is the highest peak in Spain and the third tallest volcano in the world, measured from its base on the ocean floor. The islands have warm summers and winters warm enough for the climate to be technically tropical at sea level. The amount of precipitation and the level of maritime moderation vary depending on location and elevation. The archipelago includes green areas as well as desert areas. The islands’ high mountains are ideal for astronomical observation, because they lie above the temperature inversion layer. As a result, the archipelago boasts two professional observatories: Teide Observatory on the island of Tenerife, and Roque de los Muchachos Observatory on the island of La Palma. In 1927, the Province of Canary Islands was split into two provinces. In 1982, the autonomous community of the Canary Islands was established. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria are, jointly, the capital of the islands. Those cities are also, respectively, the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s. Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927, it was ordered by decree that the capital of the Canary Islands would be shared between two cities, and this arrangement persists to the present day. The third largest city in the Canary Islands is San Cristóbal de La Laguna (a World Heritage Site) on Tenerife. This city is also home to the Consejo Consultivo de Canarias, which is the supreme consultative body of the Canary Islands. 
During the Age of Sail, the Canaries were the main stopover for Spanish galleons on their way to the Americas, which sailed that far south in order to catch the prevailing northeasterly trade winds. Etymology The name Islas Canarias is likely derived from the Latin name Canariae Insulae, meaning "Islands of the Dogs", a name that was evidently generalized from the ancient name of one of these islands, Canaria – presumably Gran Canaria. According to the historian Pliny the Elder, the island Canaria contained "vast multitudes of dogs of very large size". Other theories speculate that the name comes from the Nukkari Berber tribe living in the Moroccan Atlas, named in Roman sources as Canarii, though Pliny again mentions the relation of this term with dogs. The connection to dogs is retained in their depiction on the islands' coat-of-arms. It is considered that the aborigines of Gran Canaria called themselves "Canarios". It is possible that after being conquered, this name was used in plural in Spanish, i.e., as to refer to all of the islands as the Canarii-as. The name of the islands is not derived from the canary bird; rather, the birds are named after the islands. Physical geography Tenerife is the largest and most populous island of the archipelago. Gran Canaria, with 865,070 inhabitants, is both the Canary Islands' second most populous island, and the third most populous one in Spain after Tenerife (966,354 inhabitants) and Majorca (896,038 inhabitants). The island of Fuerteventura is the second largest in the archipelago and located from the African coast. The islands form the Macaronesia ecoregion with the Azores, Cape Verde, Madeira, and the Savage Isles. The Canary Islands is the largest and most populated archipelago of the Macaronesia region. The archipelago consists of seven large and several smaller islands, all of which are volcanic in origin. The antipodes of the Canary Islands are found in the Pacific Ocean, between New Zealand, New Caledonia, Australia and the ocean. According to the position of the islands with respect to the north-east trade winds, the climate can be mild and wet or very dry. Several native species form laurisilva forests. As a consequence, the individual islands in the Canary archipelago tend to have distinct microclimates. Those islands such as El Hierro, La Palma and La Gomera lying to the west of the archipelago have a climate which is influenced by the moist Canary Current. They are well vegetated even at low levels and have extensive tracts of sub-tropical laurisilva forest. As one travels east toward the African coast, the influence of the current diminishes, and the islands become increasingly arid. Fuerteventura and Lanzarote, the islands which are closest to the African mainland, are effectively desert or semi desert. Gran Canaria is known as a "continent in miniature" for its diverse landscapes like Maspalomas and Roque Nublo. In terms of its climate Tenerife is particularly interesting. The north of the island lies under the influence of the moist Atlantic winds and is well vegetated, while the south of the island around the tourist resorts of Playa de las Americas and Los Cristianos is arid. The island rises to almost above sea level, and at altitude, in the cool relatively wet climate, forests of the endemic pine Pinus canariensis thrive. 
Many of the plant species in the Canary Islands, like the Canary Island pine and the dragon tree, Dracaena draco are endemic, as noted by Sabin Berthelot and Philip Barker Webb in their work, L'Histoire Naturelle des Îles Canaries (1835–50). Climate The climate is warm subtropical and generally semidesertic, moderated by the sea and in summer by the trade winds. There are a number of microclimates and the classifications range mainly from semi-arid to desert. According to Köppen, the majority of the Canary Islands have a hot desert climate represented as BWh, caused partly due to the cool Canary Current. There also exists a subtropical humid climate which is very influenced by the ocean in the middle of the islands of La Gomera, Tenerife and La Palma, where laurisilva cloud forests grow. Geology The seven major islands, one minor island, and several small islets were originally volcanic islands, formed by the Canary hotspot. The Canary Islands is the only place in Spain where volcanic eruptions have been recorded during the Modern Era, with some volcanoes still active (El Hierro, 2011). Volcanic islands such as those in the Canary chain often have steep ocean cliffs caused by catastrophic debris avalanches and landslides. The island chain's most recent eruption occurred at Cumbre Vieja, a volcanic ridge on La Palma, in 2021. The Teide volcano on Tenerife is the highest mountain in Spain, and the third tallest volcano on Earth on a volcanic ocean island. All the islands except La Gomera have been active in the last million years; four of them (Lanzarote, Tenerife, La Palma and El Hierro) have historical records of eruptions since European discovery. The islands rise from Jurassic oceanic crust associated with the opening of the Atlantic. Underwater magmatism commenced during the Cretaceous, and continued to the present day. The current islands reached the ocean's surface during the Miocene. The islands were once considered as a distinct physiographic section of the Atlas Mountains province, which in turn is part of the larger African Alpine System division, but are nowadays recognized as being related to a magmatic hot spot. In the summer of 2011 a series of low-magnitude earthquakes occurred beneath El Hierro. These had a linear trend of northeast–southwest. In October a submarine eruption occurred about south of Restinga. This eruption produced gases and pumice, but no explosive activity was reported. The following table shows the highest mountains in each of the islands: Natural symbols The official natural symbols associated with Canary Islands are the bird Serinus canaria (canary) and the Phoenix canariensis palm. National parks Four of Spain's thirteen national parks are located in the Canary Islands, more than any other autonomous community. Two of these have been declared UNESCO World Heritage Sites and the other two are part of Biosphere Reserves. The parks are: Teide National Park is the oldest and largest national park in the Canary Islands and one of the oldest in Spain. Located in the geographic centre of the island of Tenerife, it is the most visited national park in Spain. In 2010, it became the most visited national park in Europe and second worldwide. The park's highlight is the Teide volcano; standing at an altitude of , it is the highest elevation of the country and the third largest volcano on Earth from its base. In 2007, the Teide National Park was declared one of the 12 Treasures of Spain. 
Politics Governance The regional executive body, the Canary Islands Government, is presided over by Ángel Víctor Torres (PSOE), the current President of the Canary Islands. The president is invested by the members of the regional legislature, the Parliament of the Canary Islands, which consists of 70 elected legislators. The last regional election took place in May 2019. The islands have 14 seats in the Spanish Senate. Of these, 11 seats are directly elected (3 for Gran Canaria, 3 for Tenerife, and 1 each for Lanzarote (including La Graciosa), Fuerteventura, La Palma, La Gomera and El Hierro), while the other 3 are appointed by the regional legislature. Political geography The Autonomous Community of the Canary Islands consists of two provinces (provincias), Las Palmas and Santa Cruz de Tenerife, whose capitals (Las Palmas de Gran Canaria and Santa Cruz de Tenerife) are jointly the capitals of the autonomous community. Each of the seven major islands is ruled by an island council named Cabildo Insular. Each island is subdivided into smaller municipalities (municipios); Las Palmas is divided into 34 municipalities, and Santa Cruz de Tenerife is divided into 54 municipalities. The international boundary of the Canaries is one subject of dispute in Morocco–Spain relations. Moreover, in 2022 the UN declared the Canary Islands' territorial waters to be part of the Moroccan coast, and Morocco has authorised gas and oil exploration in what the Canary Islands hold to be Canarian territorial waters and Western Sahara waters. Morocco's official position is that international laws regarding territorial limits do not authorise Spain to claim seabed boundaries based on the territory of the Canaries, since the Canary Islands enjoy a large degree of autonomy. In fact, the islands do not enjoy any special degree of autonomy, as each of the Spanish regions is an autonomous community with equal status to those on the mainland. Canarian nationalism There are some pro-independence political parties, like the National Congress of the Canaries (CNC) and the Popular Front of the Canary Islands, but their popular support is almost insignificant, with no presence in either the autonomous parliament or the cabildos insulares. According to a 2012 study by the Centro de Investigaciones Sociológicas, when asked about national identity, the majority of respondents from the Canary Islands (53.8%) consider themselves Spanish and Canarian in equal measure, followed by 24% who consider themselves more Canarian than Spanish. Only 6.1% of the respondents consider themselves only Canarian, while 7% consider themselves only Spanish. History Ancient and pre-Hispanic times Before the arrival of humans, the Canaries were inhabited by prehistoric animals, for example the giant lizard (Gallotia goliath), the Tenerife and Gran Canaria giant rats, and the giant prehistoric tortoises Geochelone burchardi and Geochelone vulcanica. Although the original settlement of what are now called the Canary Islands is not entirely clear, linguistic, genetic, and archaeological analyses indicate that indigenous peoples were living on the Canary Islands at least 2,000 years ago, but possibly one thousand years or more before, and that they shared a common origin with the Berbers on the nearby North African coast. The islands may have been reached by several small boats, landing on the easternmost islands, Lanzarote and Fuerteventura.
These groups came to be known collectively as the Guanches, although Guanches had been the name for only the indigenous inhabitants of Tenerife. As José Farrujia describes, ‘The indigenous Canarians lived mainly in natural caves, usually near the coast, 300-500m above sea level. These caves were sometimes isolated but more commonly formed settlements, with burial caves nearby’. Archaeological work has uncovered a rich culture visible through artefacts of ceramics, human figures, fishing, hunting and farming tools, plant fibre clothing and vessels, as well as cave paintings. At Lomo de los Gatos on Gran Canaria, a site occupied from 1,600 years ago up until the 1960s, round stone houses, complex burial sites, and associated artefacts have been found. Across the islands are thousands of Libyco-Berber alphabet inscriptions scattered and they have been extensively documented by many linguists. The social structure of indigenous Canarians encompassed ‘a system of matrilineal descent in most of the islands, in which inheritance was passed on via the female line. Social status and wealth were hereditary and determined the individual’s position in the social pyramid, which consisted of the king, the relatives of the king, the lower nobility, villeins, plebeians, and finally executioners, butchers, embalmers, and prisoners’. Their religion was animist, centring on the sun and moon, as well as natural features such as mountains. From the 14th century onward, numerous visits were made by sailors from Majorca, Portugal and Genoa. Lancelotto Malocello settled on Lanzarote in 1312. The Majorcans established a mission with a bishop in the islands that lasted from 1350 to 1400. The islands may have been visited by the Phoenicians, the Greeks, and the Carthaginians. King Juba II, Caesar Augustus's Numidian protégé, is credited with discovering the islands for the Western world. According to Pliny the Elder, Juba found the islands uninhabited, but found "a small temple of stone" and "some traces of buildings". Juba dispatched a naval contingent to re-open the dye production facility at Mogador in what is now western Morocco in the early first century AD. That same naval force was subsequently sent on an exploration of the Canary Islands, using Mogador as their mission base. The names given by Romans to the individual islands were Ninguaria or Nivaria (Tenerife), Canaria (Gran Canaria), Pluvialia or Invale (Lanzarote), Ombrion (La Palma), Planasia (Fuerteventura), Iunonia or Junonia (El Hierro) and Capraria (La Gomera). Castilian conquest In 1402, the Castilian colonisation of the islands began, with the expedition of the French explorers Jean de Béthencourt and Gadifer de la Salle, nobles and vassals of Henry III of Castile, to Lanzarote. From there, they went on to conquer Fuerteventura (1405) and El Hierro. These invasions were ‘brutal cultural and military clashes between the indigenous population and the Castilians’ lasting over a century due to formidable resistance by indigenous Canarians. Béthencourt received the title King of the Canary Islands, but still recognised King Henry III as his overlord. It was not a simple military enterprise, given the aboriginal resistance on some islands. 
Neither was it politically, since the particular interests of the nobility (determined to strengthen their economic and political power through the acquisition of the islands) conflicted with those of the states, particularly Castile, which were in the midst of territorial expansion and in a process of strengthening of the Crown against the nobility. Historians distinguish two periods in the conquest of the Canary Islands: Aristocratic conquest (Conquista señorial). This refers to the early conquests carried out by the nobility, for their own benefit and without the direct participation of the Crown of Castile, which merely granted rights of conquest in exchange for pacts of vassalage between the noble conqueror and the Crown. One can identify within this period an early phase known as the Betancurian or Norman Conquest, carried out by Jean de Bethencourt (who was originally from Normandy) and Gadifer de la Salle between 1402 and 1405, which involved the islands of Lanzarote, El Hierro and Fuerteventura. The subsequent phase is known as the Castilian Conquest, carried out by Castilian nobles who acquired, through purchases, assignments and marriages, the previously conquered islands and also incorporated the island of La Gomera around 1450. Royal conquest (Conquista realenga). This defines the conquest between 1478 and 1496, carried out directly by the Crown of Castile, during the reign of the Catholic Monarchs, who armed and partly financed the conquest of those islands which were still unconquered: Gran Canaria, La Palma and Tenerife. This phase of the conquest came to an end in the year 1496, with the dominion of the island of Tenerife, bringing the entire Canarian Archipelago under the control of the Crown of Castile. Béthencourt also established a base on the island of La Gomera, but it would be many years before the island was fully conquered. The natives of La Gomera, and of Gran Canaria, Tenerife, and La Palma, resisted the Castilian invaders for almost a century. In 1448 Maciot de Béthencourt sold the lordship of Lanzarote to Portugal's Prince Henry the Navigator, an action that was accepted by neither the natives nor the Castilians. Despite Pope Nicholas V ruling that the Canary Islands were under Portuguese control, the crisis swelled to a revolt which lasted until 1459 with the final expulsion of the Portuguese. In 1479, Portugal and Castile signed the Treaty of Alcáçovas, which settled disputes between Castile and Portugal over the control of the Atlantic. This treaty recognized Castilian control of the Canary Islands but also confirmed Portuguese possession of the Azores, Madeira, and the Cape Verde islands, and gave the Portuguese rights to any further islands or lands in the Atlantic that might be discovered. The Castilians continued to dominate the islands, but due to the topography and the resistance of the native Guanches, they did not achieve complete control until 1496, when Tenerife and La Palma were finally subdued by Alonso Fernández de Lugo. As a result of this 'the native pre-Hispanic population declined quickly due to war, epidemics, and slavery'. The Canaries were incorporated into the Kingdom of Castile. After the conquest and the introduction of slavery After the conquest, the Castilians imposed a new economic model, based on single-crop cultivation: first sugarcane; then wine, an important item of trade with England. 
Gran Canaria was conquered by the Crown of Castile on 6 March 1480 (from 1556, of Spain), and Tenerife was conquered on 1496, and each had its own governor. There has been speculation that the abundance of Roccella tinctoria on the Canary Islands offered a profit motive for Jean de Béthencourt during his conquest of the islands. Lichen has been used for centuries to make dyes. This includes royal purple colors derived from roccella tinctoria, also known as orseille. The objective of the Spanish Crown to convert the islands into a powerhouse of cultivation required a much larger labour force. This was attained through a brutal practice of enslavement, not only of indigenous Canarians but large numbers of Africans who were forcibly taken from North and Sub-Saharan Africa. Whilst the first slave plantations in the Atlantic region were across Madeira, Cape Verde, and the Canary Islands, it was only the Canary Islands which had an indigenous population and were therefore invaded rather than newly occupied. This agriculture industry was largely based on sugarcane and the Castilians converted large swaths of the landscape for sugarcane production, and the processing and manufacturing of sugar, facilitated by enslaved labourers. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria became a stopping point for the Spanish traders, as well as conquistadors, and missionaries on their way to the New World. This trade route brought great wealth to the Castilian social sectors of the islands and soon were attracting merchants and adventurers from all over Europe. As the wealth grew, enslaved African workers were also forced into demeaning domestic roles for the rich Castilians on the islands such as servants in their houses. Research on the skeletons of some of these enslaved workers from the burial site of Finca Clavijo on Gran Canaria have showed that 'all of the adults buried in Finca Clavijo undertook extensive physical activity that involved significant stress on the spine and appendicular skeleton' that result from relentless hard labour, akin to the physical abnormalities found with enslaved peoples from other sugarcane plantations around the world. These findings of the physical strain that the enslaved at Finca Clavijo were subjected to in order to provide wealth for the Spanish elite has inspired a poem by British writer Ralph Hoyte, entitled Close to the Bone. The method of forcibly relocating Africans to the Canary Islands in order to provide intensive labour, the first time this had been attempted, was looked at favourably by other European powers and was the inspiration behind the Transatlantic Slave Trade whereby around 12 million Africans were taken from their homelands in order to enter forced labour as plantation workers and domestic servants in the Americas over a period of 400 years. As a result of the huge wealth generated by enslaved labour, magnificent palaces and churches were built on La Palma during this busy, prosperous period. The Church of El Salvador survives as one of the island's finest examples of the architecture of the 16th century. Civilian architecture survives in forms such as Casas de los Sánchez-Ochando or Casa Quintana. The Canaries' wealth invited attacks by pirates and privateers. Ottoman Turkish admiral and privateer Kemal Reis ventured into the Canaries in 1501, while Murat Reis the Elder captured Lanzarote in 1585. The most severe attack took place in 1599, during the Dutch Revolt. 
A Dutch fleet of 74 ships and 12,000 men, commanded by Pieter van der Does, attacked the capital Las Palmas de Gran Canaria (the city had 3,500 of Gran Canaria's 8,545 inhabitants). The Dutch attacked the Castillo de la Luz, which guarded the harbor. The Canarians evacuated civilians from the city, and the Castillo surrendered (but not the city). The Dutch moved inland, but Canarian cavalry drove them back to Tamaraceite, near the city. The Dutch then laid siege to the city, demanding the surrender of all its wealth. They received 12 sheep and 3 calves. Furious, the Dutch sent 4,000 soldiers to attack the Council of the Canaries, who were sheltering in the village of Santa Brígida. 300 Canarian soldiers ambushed the Dutch in the village of Monte Lentiscal, killing 150 and forcing the rest to retreat. The Dutch concentrated on Las Palmas de Gran Canaria, attempting to burn it down. The Dutch pillaged Maspalomas, on the southern coast of Gran Canaria, San Sebastián on La Gomera, and Santa Cruz on La Palma, but eventually gave up the siege of Las Palmas and withdrew. In 1618 Barbary pirates from North Africa attacked Lanzarote and La Gomera, taking 1,000 captives to be sold as slaves. Another noteworthy attack occurred in 1797, when Santa Cruz de Tenerife was attacked by a British fleet under Horatio Nelson on 25 July. The British were repulsed, losing almost 400 men. It was during this battle that Nelson lost his right arm. 18th to 19th century The sugar-based economy of the islands faced stiff competition from Spain's Caribbean colonies. Low sugar prices in the 19th century caused severe recessions on the islands. A new cash crop, cochineal (cochinilla), came into cultivation during this time, reinvigorating the islands' economy. During this time the Canarian-American trade developed, in which Canarian products such as cochineal, sugarcane and rum were sold in American ports, while thousands of Canarians emigrated to the shores of Cuba. During the Spanish–American War of 1898, the Spanish fortified the islands against a possible American attack, but no such event took place. Romantic period and scientific expeditions Sirera and Renn (2004) distinguish two different types of expeditions, or voyages, during the period 1770–1830, which they term "the Romantic period": First are "expeditions financed by the States, closely related with the official scientific institutions, characterised by having strict scientific objectives and inspired by the spirit of the Enlightenment and progress". In this type of expedition, Sirera and Renn include the following travellers: J. Edens, whose 1715 ascent and observations of Mt. Teide influenced many subsequent expeditions; Louis Feuillée (1724), who was sent to measure the meridian of El Hierro and to map the islands; Jean-Charles de Borda (1771, 1776), who more accurately measured the longitudes of the islands and the height of Mount Teide; and the Baudin-Ledru expedition (1796), which aimed to recover a valuable collection of natural history objects. The second type of expedition identified by Sirera and Renn is one that arose from more or less private initiatives. Among these, the key exponents were Alexander von Humboldt (1799), Buch and Smith (1815), Broussonet, Webb and Sabin Berthelot. Sirera and Renn identify the period 1770–1830 as one in which, "in a panorama dominated until that moment by France and England", the Germany of the Romantic period enters "with strength and brio", and its presence in the islands would increase.
Early 20th century At the beginning of the 20th century, the British introduced a new cash-crop, the banana, the export of which was controlled by companies such as Fyffes. The Province of Canary Islands had been created on 30 November 1833, with Santa Cruz de Tenerife declared its capital. The rivalry between the cities of Las Palmas de Gran Canaria and Santa Cruz de Tenerife for the capital of the islands led to the division of the archipelago into two provinces on 23 September 1927. During the time of the Second Spanish Republic, Marxist and anarchist workers' movements began to develop, led by figures such as José Miguel Pérez and Guillermo Ascanio. However, outside of a few municipalities, these organisations were a minority and fell easily to Nationalist forces during the Spanish Civil War. Franco regime In 1936, Francisco Franco was appointed General Commandant of the Canaries. He joined the military revolt of 17 July which began the Spanish Civil War. Franco quickly took control of the archipelago, except for a few points of resistance on La Palma and in the town of Vallehermoso, on La Gomera. Though there was never a war in the islands, the post-war suppression of political dissent on the Canaries was most severe. During the Second World War, Winston Churchill prepared plans for the British seizure of the Canary Islands as a naval base, in the event of Gibraltar being invaded from the Spanish mainland. The planned operation was known as Operation Pilgrim. Opposition to Franco's regime did not begin to organise until the late 1950s, which saw the resurgence of parties such as the Communist Party of Spain and the formation of various nationalist, leftist parties. Self-governance After the death of Franco, there was a pro-independence armed movement based in Algeria, the Movement for the Independence and Self-determination of the Canaries Archipelago (MAIAC). In 1968, the Organisation of African Unity recognized the MAIAC as a legitimate African independence movement, and declared the Canary Islands as an African territory still under foreign rule. After the establishment of a democratic constitutional monarchy in Spain, autonomy was granted to the Canaries via a law passed in 1982, with a newly established autonomous devolved government and parliament. In 1983, the first autonomous elections were held and the Spanish Socialist Workers' Party (PSOE) won. In the 2007 elections, the PSOE gained a plurality of seats, but the nationalist Canarian Coalition and the conservative Partido Popular (PP) formed a ruling coalition government. Capitals At present, the Canary Islands is the only autonomous community in Spain that has two capitals, Santa Cruz de Tenerife and Las Palmas de Gran Canaria, since the Statute of Autonomy was approved in 1982. The political capital of the archipelago did not exist as such until the nineteenth century. The first cities founded by the Europeans at the time of the conquest of the Canary Islands in the 15th century were Telde (in Gran Canaria), San Marcial del Rubicón (in Lanzarote) and Betancuria (in Fuerteventura). These cities boasted the first European institutions present in the archipelago, including Catholic bishoprics. However, because the period of splendour of these cities came before the complete conquest of the archipelago and its incorporation into the Crown of Castile, they never held real political control over the entire Canary archipelago.
A Canarian city with full jurisdiction over the entire archipelago only existed after the conquest of the Canary Islands, and originally only de facto, that is, without legal standing and tied to the seat of the Canary Islands General Captaincy. Las Palmas de Gran Canaria was the first city to exercise this function, because the residence of the Captain General of the Canary Islands was in this city during part of the sixteenth and seventeenth centuries. In May 1661, the Captain General of the Canary Islands, Jerónimo de Benavente y Quiñones, moved the headquarters of the captaincy to the city of San Cristóbal de La Laguna on the island of Tenerife. This was because, since the conquest, Tenerife had been the most populated and productive island and the one with the highest economic expectations. La Laguna would be considered the de facto capital of the archipelago until the official status of capital of the Canary Islands was confirmed for the city of Santa Cruz de Tenerife in the 19th century, due in part to the constant controversies and rivalries between the bourgeoisies of San Cristóbal de La Laguna and Las Palmas de Gran Canaria for the economic, political and institutional hegemony of the archipelago. Already in 1723, the Captain General of the Canary Islands, Lorenzo Fernández de Villavicencio, had moved the headquarters of the General Captaincy of the Canary Islands from San Cristóbal de La Laguna to Santa Cruz de Tenerife. This decision still did not please society on the island of Gran Canaria. It was only after the creation of the Province of Canary Islands in November 1833 that Santa Cruz became the first fully official capital of the Canary Islands (de jure, and not merely de facto as previously). Santa Cruz de Tenerife remained the capital of the Canary archipelago until 1927, when, under the government of General Primo de Rivera, the Province of Canary Islands was split into two provinces: Las Palmas, with its capital in Las Palmas de Gran Canaria, and Santa Cruz de Tenerife, with its capital in the homonymous city. Finally, with the Statute of Autonomy of the Canary Islands in 1982 and the creation of the Autonomous Community of the Canary Islands, the capital of the archipelago was fixed as shared between Las Palmas de Gran Canaria and Santa Cruz de Tenerife, as it remains today. Demographics The Canary Islands have a population of 2,153,389 inhabitants (2019), making it the eighth most populous of Spain's autonomous communities. The total area of the archipelago is , resulting in a population density of 287.4 inhabitants per square kilometre. The Canary Islands have become home to many European residents, mainly from Italy, Germany and the UK. Because of the vast emigration to Venezuela and Cuba during the second half of the 20th century and the later return of these emigrants and their families to the Canary Islands, there are many residents whose country of origin was Venezuela (66,593) or Cuba (41,807). Since the 1990s, many illegal migrants have used the Canary Islands, Melilla and Ceuta as entry points to the EU.
Population of the individual islands The population of the islands according to the 2019 data are: Tenerife – 917,841 Gran Canaria – 851,231 Lanzarote – 152,289 (including the population of La Graciosa) Fuerteventura – 116,886 La Palma – 82,671 La Gomera – 21,503 El Hierro – 10,968 Religion The Catholic Church has been the majority religion in the archipelago for more than five centuries, ever since the Conquest of the Canary Islands. There are also several other religious communities. Roman Catholic Church The overwhelming majority of native Canarians are Roman Catholic (76.7%) with various smaller foreign-born populations of other Christian beliefs such as Protestants. The appearance of the Virgin of Candelaria (Patron of Canary Islands) was credited with moving the Canary Islands toward Christianity. Two Catholic saints were born in the Canary Islands: Peter of Saint Joseph de Betancur and José de Anchieta. Both born on the island of Tenerife, they were respectively missionaries in Guatemala and Brazil. The Canary Islands are divided into two Catholic dioceses, each governed by a bishop: Diócesis Canariense: Includes the islands of the Eastern Province: Gran Canaria, Fuerteventura and Lanzarote. Its capital was San Marcial El Rubicón (1404) and Las Palmas de Gran Canaria (1483–present). There was a previous bishopric which was based in Telde, but it was later abolished. Diócesis Nivariense: Includes the islands of the western province: Tenerife, La Palma, La Gomera and El Hierro. Its capital is San Cristóbal de La Laguna (1819–present). Other religions Separate from the overwhelming Christian majority are a minority of Muslims. Among the followers of Islam, the Islamic Federation of the Canary Islands exists to represent the Islamic community in the Canary Islands as well as to provide practical support to members of the Islamic community. Other religious faiths represented include Jehovah's Witnesses, The Church of Jesus Christ of Latter-day Saints as well as Hinduism. Minority religions are also present such as the Church of the Guanche People which is classified as a neo-pagan native religion. Also present are Buddhism, Judaism, Baháʼí, African religion, and Chinese religions. According to Statista in 2019, there are 75.662 Muslims in Canary Islands. Statistics The distribution of beliefs in 2012 according to the CIS Barometer Autonomy was as follows: Catholic 84.9% Atheist/Agnostic/Unbeliever 12.3% Other religions 1.7% Population genetics Islands Ordered from west to east, the Canary Islands are El Hierro, La Palma, La Gomera, Tenerife, Gran Canaria, Fuerteventura, Lanzarote and La Graciosa. In addition, north of Lanzarote are the islets of Montaña Clara, Alegranza, Roque del Este and Roque del Oeste, belonging to the Chinijo Archipelago, and northeast of Fuerteventura is the islet of Lobos. There are also a series of small adjacent rocks in the Canary Islands: the Roques de Anaga, Garachico and Fasnia in Tenerife, and those of Salmor and Bonanza in El Hierro. El Hierro El Hierro, the westernmost island, covers , making it the second smallest of the major islands, and the least populous with 10,798 inhabitants. The whole island was declared Reserve of the Biosphere in 2000. Its capital is Valverde. Also known as Ferro, it was once believed to be the westernmost land in the world. Fuerteventura Fuerteventura, with a surface of , is the second-most extensive island of the archipelago. It has been declared a Biosphere reserve by Unesco. It has a population of 113,275. 
Being also the most ancient of the islands, it is the most eroded: its highest point, the Peak of the Bramble, stands at a height of . Its capital is Puerto del Rosario. Gran Canaria Gran Canaria has 846,717 inhabitants. The capital, Las Palmas de Gran Canaria (377,203 inhabitants), is the most populous city and shares the status of capital of the Canaries with Santa Cruz de Tenerife. Gran Canaria's surface area is . In the centre of the island lie Roque Nublo and Pico de las Nieves ("Peak of Snow"). In the south of the island are the Maspalomas Dunes, one of its biggest tourist attractions. La Gomera La Gomera has an area of and is the second least populous island with 21,136 inhabitants. Geologically it is one of the oldest of the archipelago. The insular capital is San Sebastián de La Gomera. Garajonay National Park is located on the island. Lanzarote Lanzarote is the easternmost island and one of the most ancient of the archipelago, and it has shown evidence of recent volcanic activity. It has a surface of , and a population of 149,183 inhabitants, including the adjacent islets of the Chinijo Archipelago. The capital is Arrecife, with 56,834 inhabitants. Chinijo Archipelago The Chinijo Archipelago includes the islands La Graciosa, Alegranza, Montaña Clara, Roque del Este and Roque del Oeste. It has a surface of , and only La Graciosa is populated, with 658 inhabitants. With , La Graciosa is the smallest inhabited island of the Canaries and the main island of the Chinijo Archipelago. La Palma La Palma, with 81,863 inhabitants covering an area of , is in its entirety a biosphere reserve. It had shown no signs of volcanic activity since the volcano Teneguía erupted in 1971; then, on September 19, 2021, the Cumbre Vieja volcano on the island erupted. It is the second-highest island of the Canaries, with the Roque de los Muchachos at as its highest point. Santa Cruz de La Palma (known to those on the island as simply "Santa Cruz") is its capital. Tenerife Tenerife is, with its area of , the most extensive island of the Canary Islands. In addition, with 904,713 inhabitants it is the most populated island of the archipelago and of Spain. Two of the archipelago's principal cities are located on it: the capital, Santa Cruz de Tenerife, and San Cristóbal de La Laguna (a World Heritage Site). San Cristóbal de La Laguna, the second city of the island, is home to the oldest university in the Canary Islands, the University of La Laguna. Teide, with its , is the highest peak of Spain and also a World Heritage Site. Tenerife is the site of the worst air disaster in the history of aviation, in which 583 people were killed in the collision of two Boeing 747s on 27 March 1977. La Graciosa Graciosa Island, commonly La Graciosa, is a volcanic island in the Canary Islands of Spain, located 2 km (1.2 mi) north of the island of Lanzarote across the Strait of El Río. It was formed by the Canary hotspot. The island is part of the Chinijo Archipelago and the Chinijo Archipelago Natural Park (Parque Natural del Archipiélago Chinijo). It is administered by the municipality of Teguise. In 2018 La Graciosa officially became the eighth Canary Island. Before then, La Graciosa had the status of an islet, administratively dependent on the island of Lanzarote. It is the smallest and least populated of the main islands, with a population of about 700 people. Economy and environment The economy is based primarily on tourism, which makes up 32% of the GDP.
The Canaries receive about 12 million tourists per year. Construction makes up nearly 20% of the GDP, and tropical crops, primarily bananas and tobacco, are grown for export to Europe and the Americas. Ecologists are concerned that resources, especially in the more arid islands, are being overexploited, but there are still many agricultural products, such as tomatoes, potatoes, onions, cochineal, sugarcane, grapes, vines, dates, oranges, lemons, figs, wheat, barley, maize, apricots, peaches and almonds. Water resources are also being overexploited, due to the high water usage by tourists. In addition, some islands (such as Gran Canaria and Tenerife) overexploit their groundwater, to such a degree that, according to European and Spanish legal regulations, the current situation is not acceptable. To address the problems, good governance and a change in the water-use paradigm have been proposed. These solutions depend largely on controlling water use and on demand management. As this is administratively difficult and politically unpalatable, most action is currently directed at increasing the public supply of water through imports from outside, a decision which is economically, politically and environmentally questionable. To bring in revenue for environmental protection, innovation, training and water sanitation, a tourist tax was considered in 2018, along with a doubling of the ecotax and restrictions on holiday rentals in the zones with the greatest pressure of demand. The economy amounts to €25 billion (2001 GDP figures). The islands experienced continuous growth during a 20-year period, up until 2001, at a rate of approximately 5% annually. This growth was fueled mainly by huge amounts of foreign direct investment, mostly to develop tourism real estate (hotels and apartments), and by European funds (nearly €11 billion in the period from 2000 to 2007), since the Canary Islands are designated an Objective 1 region (eligible for European structural funds). Additionally, the EU allows the Canary Islands Government to offer special tax concessions for investors who incorporate under the Zona Especial Canaria (ZEC) regime and create more than five jobs. Spain gave permission in August 2014 for Repsol and its partners to explore oil and natural gas prospects off the Canary Islands, involving an investment of €7.5 billion over four years, to commence at the end of 2016. Repsol at the time said the area could ultimately produce 100,000 barrels of oil a day, which would meet 10 percent of Spain's energy needs. However, the analysis of the samples obtained showed neither the volume nor the quality needed to justify future extraction, and the project was scrapped. Despite the islands' currently very high dependence on fossil fuels, research on renewable energy potential has concluded that a high potential for renewable energy technologies exists on the archipelago, to the extent that a scenario pathway to a 100% renewable energy supply by 2050 has been put forward. The Canary Islands have great natural attractions, and the climate and beaches make the islands a major tourist destination, visited each year by about 12 million people (11,986,059 in 2007, of whom 29% were Britons, 22% Spaniards from outside the Canaries, and 21% Germans). Among the islands, Tenerife receives the largest number of tourists annually, followed by Gran Canaria and Lanzarote.
The archipelago's principal tourist attraction is the Teide National Park (in Tenerife), where the highest mountain in Spain and third largest volcano in the world, Mount Teide, receives over 2.8 million visitors annually. The combination of high mountains, proximity to Europe, and clean air has made the Roque de los Muchachos peak (on La Palma island) a leading location for telescopes like the Grantecan. The islands, as an autonomous region of Spain, are in the European Union and the Schengen Area. They are in the European Union Customs Union but outside the VAT area. Instead of VAT, there is a local sales tax (IGIC) which has a general rate of 7%, an increased rate of 13.5%, a reduced rate of 3% and a zero rate for certain basic-need products and services. Consequently, some products are subject to additional VAT if exported from the islands into mainland Spain or the rest of the EU. Canarian time is Western European Time (WET) (or GMT; in summer one hour ahead of GMT). Canarian time is thus one hour behind that of mainland Spain and the same as that of the UK, Ireland and mainland Portugal all year round. Tourism statistics The number of tourists who visited the Canary Islands was 16,150,054 in 2018 and 15,589,290 in 2019. GDP statistics The Gross Domestic Product (GDP) in the Canary Islands in 2015 was , per capita. The figures by island are as follows: Transport The Canary Islands have eight airports altogether, two of the main ports of Spain, and an extensive network of autopistas (highways) and other roads. Traffic congestion is sometimes a problem in Tenerife and on Gran Canaria. Large ferry boats and fast ferries link most of the islands. Both types can transport large numbers of passengers, cargo, and vehicles. Fast ferries are made of aluminium and powered by modern and efficient diesel engines, while conventional ferries have a steel hull and are powered by heavy oil. Fast ferries travel in excess of ; conventional ferries travel in excess of , but are slower than fast ferries. A typical ferry ride between La Palma and Tenerife may take up to eight hours or more, while a fast ferry takes about two and a half hours; between Tenerife and Gran Canaria the crossing can be about one hour. The largest airport is the Gran Canaria Airport. Tenerife has two airports, Tenerife North Airport and Tenerife South Airport. The island of Tenerife handles the highest passenger traffic of all the Canary Islands through its two airports. The two main islands (Tenerife and Gran Canaria) receive the greatest number of passengers: Tenerife 6,204,499 passengers and Gran Canaria 5,011,176 passengers. The port of Las Palmas is first in freight traffic in the islands, while the port of Santa Cruz de Tenerife is the leading fishing port, with approximately 7,500 tons of fish landed, according to the Spanish government publication Statistical Yearbook of State Ports. Similarly, it is the second port in Spain in terms of ship traffic, surpassed only by the Port of Algeciras Bay. The port's facilities include a border inspection post (BIP) approved by the European Union, which is responsible for inspecting all types of imports from third countries or exports to countries outside the European Economic Area. The port of Los Cristianos (Tenerife) has the greatest number of passengers recorded in the Canary Islands, followed by the port of Santa Cruz de Tenerife.
The Port of Las Palmas is the third port in the islands in passengers and first in number of vehicles transported.
Ridenhour is politically active; he co-hosted Unfiltered on Air America Radio, testified before Congress in support of peer-to-peer MP3 sharing, and was involved in a 2004 rap political convention. He has continued to be an activist, publisher, lecturer, and producer. Addressing the negative views associated with rap music, he co-wrote the essay book Fight the Power: Rap, Race, and Reality with Yusuf Jah. He argues that "music and art and culture is escapism, and escapism sometimes is healthy for people to get away from reality", but sometimes the distinction is blurred and that's when "things could lead a young mind in a direction." He also founded the record company Slam Jamz and acted as narrator in Kareem Adouard's short film Bling: Consequences and Repercussions, which examines the role of conflict diamonds in bling fashion. Despite Chuck D and Public Enemy's success, Chuck D claims that popularity or public approval was never a driving motivation behind their work. He is admittedly skeptical of celebrity status, revealing in a 1999 interview with BOMB Magazine that, "The key for the record companies is to just keep making more and more stars, and make the ones who actually challenge our way of life irrelevant. The creation of celebrity has clouded the minds of most people in America, Europe and Asia. It gets people off the path they need to be on as individuals." In an interview with Le Monde published January 29, 2008, Chuck D stated that rap is devolving so much into a commercial enterprise, that the relationship between the rapper and the record label is that of slave to a master. He believes that nothing has changed for African-Americans since the debut of Public Enemy and, although he thinks that an Obama-Clinton alliance is great, he does not feel that the establishment will allow anything of substance to be accomplished. He stated that French President Nicolas Sarkozy is like any other European elite: he has profited through the murder, rape, and pillaging of those less fortunate and he refuses to allow equal opportunity for those men and women from Africa. In this article, he defended a comment made by Professor Griff in the past that he says was taken out of context by the media. The real statement was a critique of the Israeli government and its treatment of the Palestinian people. Chuck D stated that it is Public Enemy's belief that all human beings are equal. In an interview with the magazine N'Digo published in June 2008, he spoke of today's mainstream urban music seemingly relishing the addictive euphoria of materialism and sexism, perhaps being the primary cause of many people harboring resentment towards the genre and its future. However, he has expressed hope for its resurrection, saying "It's only going to be dead if it doesn't talk about the messages of life as much as the messages of death and non-movement", citing artists such as NYOil, M.I.A. and The Roots as socially conscious artists who push the envelope creatively. "A lot of cats are out there doing it, on the Web and all over. They're just not placing their career in the hands of some major corporation." In 2010, Chuck D released a track, "Tear Down That Wall." He said, "I talked about the wall not only just dividing the U.S. and Mexico but the states of California, New Mexico and Texas. But Arizona, it's like, come on.
Now they're going to enforce a law that talks about basically racial profiling." He is on the board of the TransAfrica Forum, a Pan African organization that is focused on African, Caribbean and Latin American issues. He has been an activist with projects of The Revcoms, such as Refuse Fascism and Stop Mass Incarceration Network. Carl Dix interviewed Chuck D on The Revcoms' YouTube program The RNL – Revolution, Nothing Less! – Show. Personal life Chuck D lives in California, and lost his home in the Thomas Fire of December 2017–January 2018. TV appearances Narrated and appeared on-camera for the 2005 PBS documentary Harlem Globetrotters: The Team That Changed the World. Appeared on-camera for the PBS program Independent Lens: Hip-Hop: Beyond Beats and Rhymes. Appeared in an episode of NewsRadio as himself. He appeared on The Henry Rollins Show. He was a featured panelist (with Lars Ulrich) on the May 12, 2000 episode of the Charlie Rose show. Host Charlie Rose was discussing the Internet, copyright infringement, Napster Inc., and the future of the music industry. He appeared on an episode of Space Ghost Coast to Coast with Pat Boone. While there, Space Ghost tried (and failed) to show he was "hip" to rap, saying his favorite rapper was M. C. Escher. He appeared on an episode of Johnny Bravo. He appeared via satellite to the UK, as a panelist on BBC's Newsnight on January 20, 2009, following Barack Obama's Inauguration. He appeared on a Christmas episode of Adult Swim's Aqua Teen Hunger Force. He Appeared on VH1 Ultimate Albums Blood Sugar Sex Magik talking about the Red Hot Chili Peppers. He appeared on Foo Fighters: Sonic Highways in the New York City episode talking about the beginnings of the NYC Hip-Hop scene Music appearances In 1990, Chuck featured on Sonic Youth single Kool Thing. In 1993, Chuck rapped on "New Agenda" from Janet Jackson's janet. "I loved his work, but I'd never met him…" said Jackson. "I called Chuck up and told him how much I admired their work. When I hear Chuck, it's like I'm hearing someone teaching, talking to a whole bunch of people. And instead of just having the rap in the bridge, as usual, I wanted him to do stuff all the way through. I sent him a tape. He said he loved the song, but he was afraid he was going to mess it up. I said, 'Are you kidding?'" In 1999, Chuck D appeared on Prince's hit "Undisputed" on Prince's "Rave Un2 The Joy Fantastic". In 2001, Chuck D appeared on Japanese electronic duo Boom Boom Satellites track "Your Reality's A Fantasy But Your Fantasy Is Killing Me," off the album Umbra. In 2001, Chuck D provided vocals for Public Domain's Rock Da Funky Beats. In 2010, Chuck D made an appearance on the track "Transformação" (Portuguese for "Transformation") from Brazilian rapper MV Bill's album Causa E Efeito (:pt:Causa e Efeito, meaning Cause And Effect). In 2003 he was featured on the track "Access to the Excess" in Junkie XL's album Radio JXL: A Broadcast from the Computer Hell Cabin. In 2011 Chuck D made an appearance on the track "Blue Sky / Mad Mad World / The Good God Is A Woman And She Don't Like Ugly" from Meat Loaf's 2011 album Hell In A Handbasket. In 2013, he has appeared in Mat Zo's single, "Pyramid Scheme." In 2013 he performed at the Rock and Roll Hall of Fame Music Masters concert tribute to the Rolling Stones. In 2014 he performed with Jahi on "People Get Ready" and "Yo!" 
from the first album by Public Enemy spin-off project PE 2.0. In 2016 he appeared on ASAP Ferg's album "Always Strive and Prosper" on the track "Beautiful People". In 2017 he was featured on the track "America" on Logic's album "Everybody". In 2019, he appeared on "Story of Everything", a song on Threads, the final album by Sheryl Crow. The track also features Andra Day and Gary Clark Jr. Discography With Public Enemy, studio albums: Yo! Bum Rush the Show (1987), It Takes a Nation of Millions to Hold Us Back (1988), Fear of a Black Planet (1990), Apocalypse 91... The Enemy Strikes Black (1991), Muse Sick-n-Hour Mess Age (1994), He Got Game (1998), There's a Poison Goin' On (1999), Revolverlution (2002), New Whirl Odor (2005), How You Sell Soul to a Soulless People Who Sold Their Soul? (2007), Most of My Heroes Still Don't Appear on No Stamp (2012), The Evil Empire of Everything (2012), Man Plans God Laughs (2015), Nothing Is Quick in the Desert (2017) and What You Gonna Do When the Grid Goes Down? (2020). With Confrontation Camp, studio album: Objects in the Mirror Are Closer Than They Appear (2000). Public Enemy followed their debut album with It Takes a Nation of Millions to Hold Us Back (1988), Fear of a Black Planet (1990), Apocalypse 91... The Enemy Strikes Black (1991), the compilation album Greatest Misses (1992), and Muse Sick-n-Hour Mess Age (1994). They also released a full-length album soundtrack for the film He Got Game in 1998. Ridenhour also contributed (as Chuck D) to several episodes of the PBS documentary series The Blues. He has appeared as a featured artist on many other songs and albums, having collaborated with artists such as Janet Jackson, Kool Moe Dee, The Dope Poet Society, Run–D.M.C., Ice Cube, Boom Boom Satellites, Rage Against the Machine, Anthrax, John Mellencamp and many others. In 1990, he appeared on "Kool Thing", a song by the alternative rock band Sonic Youth, and along with Flavor Flav, he sang on George Clinton's song "Tweakin'", which appears on his 1989 album The Cinderella Theory. In 1993, he executive produced Got 'Em Running Scared, an album by Ichiban Records group Chief Groovy Loo and the Chosen Tribe. Later career In 1996, Ridenhour released Autobiography of Mistachuck on Mercury Records. Chuck D made a rare appearance at the 1998 MTV Video Music Awards, presenting the Video Vanguard Award to the Beastie Boys, whilst commending their musicianship. In November 1998, he settled out of court with Christopher "The Notorious B.I.G." Wallace's estate over the latter's sampling of his voice in the song "Ten Crack Commandments". The specific sampling is Ridenhour counting off the numbers one to nine on the track "Shut 'Em Down". He later described the decision to sue as "stupid". In September 1999, he launched a multi-format "supersite" on the web site Rapstation.com. The site includes a TV and radio station with original programming, prominent hip hop DJs, celebrity interviews, free MP3 downloads (the first was contributed by multi-platinum rapper Coolio), downloadable ringtones by ToneThis, social commentary, current events, and regular features on turning rap careers into a viable living. Since 2000, he has been one of the most vocal supporters of peer-to-peer file sharing in the music industry. He loaned his voice to Grand Theft Auto: San Andreas as DJ Forth Right MC for the radio station Playback FM. In 2000, he collaborated with Public Enemy's Gary G-Whiz and MC Lyte on the theme music to the television show Dark Angel.
He appeared with Henry Rollins in a cover of Black Flag's "Rise Above" for the album Rise Above: 24 Black Flag Songs to Benefit the West Memphis Three. In 2003, he was featured in the PBS documentary Godfathers and Sons in which he recorded a version of Muddy Waters' song "Mannish Boy" with Common, Electrik Mud Cats, and Kyle Jason. He was also featured on Z-Trip's album Shifting Gears on a track called "Shock and Awe"; a 12-inch of the track was released featuring artwork by Shepard Fairey. In 2008 he contributed a chapter to Sound Unbound: Sampling Digital Music and Culture (The MIT Press, 2008) edited by Paul D. Miller a.k.a. DJ Spooky, and also turned up on The Go! Team's album Proof of Youth on the track "Flashlight Fight." He also fulfilled his childhood dreams of being a sports announcer by performing the play-by-play commentary in the video game NBA Ballers: Chosen One on Xbox 360 and PlayStation 3. In 2009, Ridenhour wrote the foreword to the book The Love Ethic: The Reason Why You Can't Find and Keep Beautiful Black Love by Kamau and Akilah Butler. He also appeared on Brother Ali's album, Us. In March 2011, Chuck D re-recorded vocals with The Dillinger Escape Plan for a cover of "Fight the Power". Chuck D duetted with rock singer Meat Loaf on his 2011 album Hell in a Handbasket on the song "Mad Mad World/The Good God Is a Woman and She Don't Like Ugly". In 2016 Chuck D joined the band Prophets of Rage along with B-Real and former members of Rage Against the Machine. In July 2019, Ridenhour sued Terrordome Music Publishing and Reach Music Publishing for $1 million for withholding royalties. Rapping technique and creative process Chuck D is known for his powerful rapping. How to Rap says he "has a powerful, resonant voice that is often acclaimed as one of the most distinct and impressive in hip-hop". Chuck says this was based on listening to Melle Mel and sportscasters such as Marv Albert. Chuck often comes up with a title for a song first. He writes on paper, though sometimes edits using a computer. He prefers to not punch in or overdub vocals. Chuck listed his favourite rap albums in Hip Hop Connection: N.W.A, Straight Outta Compton; Boogie Down Productions, Criminal Minded; Run-DMC, Tougher Than Leather; Big Daddy Kane, Looks Like a Job For...; Stetsasonic, In Full Gear; Ice Cube, AmeriKKKa's Most Wanted; Dr. Dre, The Chronic; De La Soul, 3 Feet High and Rising; Eric B. & Rakim, Follow the Leader; and Run-DMC, Raising Hell ("It was the first record that made me realise this was an album-oriented genre"). Politics Chuck D identifies as Black, as opposed to African or African-American. In a 1993 issue of DIRT Magazine covering a taping of In the Mix hosted by Alimi Ballard at the Apollo, Dan Field writes: At one point, Chuck bristles a bit at the term "African-American." He thinks of himself as Black and sees nothing wrong with the term. Besides, he says, having been born in the United States and lived his whole life here, he doesn't consider himself African. Being in Public Enemy has given him the chance to travel around the world, an experience that really opened his eyes and his mind. He says visiting Africa and experiencing life on a continent where the majority of people are Black gave him a new perspective and helped him get in touch with his own history. He also credits a trip to the ancient Egyptian pyramids at Giza with helping him appreciate the relative smallness of man.
The actor may fumble some of his lines in a group shot; rather than discarding a good version of the shot, the director may just have the actor repeat the lines for a new shot, and cut to that alternate view when necessary. Cutaways are also used often in older horror films in place of special effects. For example, a shot of a zombie getting its head cut off may start with a view of an axe being swung through the air, followed by a close-up of the actor swinging it, then followed by a cut back to the now severed head. George A. Romero, creator of the Dead Series, and Tom Savini pioneered effects that removed the need for cutaways in horror films. 30 Rock would often use cutaway scenes to create visual humor, the "Werewolf Bar Mitzvah" scene taking three days to create for only five seconds of screen time. In news broadcasting and documentary work, the cutaway is used much as it would be in fiction. On location, there is usually just one camera to film an interview, and it's usually trained on the interviewee. Often there is also only one microphone. After the interview, the interviewer will usually repeat his questions while he himself is being filmed, with pauses as they act as if to listen to the answers. These shots can be used as cutaways.
responses such as the skin conductance response may also provide further insight on the patient's emotional processing. In the treatment of traumatic brain injury (TBI), there are 4 examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs i.e. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, and vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of the long scanning times and because fractures cannot be detected as well as CT. MRIs are used for the imaging of soft tissues and lesions in the posterior fossa which cannot be found with the use of CT. Body movements Assessment of the brainstem and cortical function through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of what cranial nerves are still intact and functioning and is an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed. These include the cranial nerves number 2 (CN II), number 3 (CN III), number 5 (CN V), number 7 (CN VII), and cranial nerves 9 and 10 (CN IX, CN X). Assessment of posture and physique is the next step. It involves general observation about the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posturing in which the patient has arms flexed at the elbow, and arms adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posturing in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posturing indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posturing indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, as opposed to a decerebrate posturing which indicates that the lesion is closer to the brainstem. Pupil size Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; the following table is a technical, medical guideline for common pupil findings and their possible interpretations: Severity A coma can be classified as (1) supratentorial (above Tentorium cerebelli), (2) infratentorial (below Tentorium cerebelli), (3) metabolic or (4) diffused. This classification is merely dependent on the position of the original damage that caused the coma, and does not correlate with severity or the prognosis. The severity of coma impairment however is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear. 
The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent. Treatment Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation become a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood and other supportive care as needed. Continued care Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery. Pneumonia is also common in coma patients due to their inability to swallow which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs). This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia. Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling. Caregivers Coma has a wide variety of emotional reactions from the family members of the affected patients, as well as the primary care givers taking care of the patients. Research has shown that the severity of injury causing coma was found to have no significant impact compared to how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial are possible. The focus of the patient care should be on creating an amicable relationship with the family members or dependents of a comatose patient as well as creating a rapport with the medical staff. Although there is heavy importance of a primary care taker, secondary care takers can play a supporting role to temporarily relieve the primary care taker's burden of tasks. Prognosis Comas can last from several days to, in particularly extreme cases, years. After this time, some patients gradually come out of the coma, some progress to a vegetative state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness and in some cases, may remain in vegetative state for years or even decades (the longest recorded period being 42 years). Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. 
For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low. The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery, similarly, milder comas do not ensure higher chances of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods. Recovery People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and suffer from dysarthria, the inability to articulate any speech. Recovery usually occurs gradually. In the first days, patients may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses and may eventually recover full awareness. That said, some patients may never progress beyond very basic responses. There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings. A brain-damaged man, trapped in a coma-like state for six years, was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. The method, called deep brain stimulation (DBS) successfully roused communication, complex movement and eating ability in the 38-year-old American man who suffered a traumatic brain injury. His injuries left him in a minimally conscious state (MCS), a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack. Society and culture Research by Dr. Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Dr. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma victim and the agony of waiting for a patient to awaken: Reversal of Fortune (1990) and The Dreamlife of Angels (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned. Bioethics A person in a coma is said to be in an unconscious state. Perspectives on personhood, identity and consciousness come into play when discussing the metaphysical and bioethical views on comas. It has been argued that unawareness should be just as ethically relevant and important as a state of awareness and that there should be metaphysical support of unawareness as a state. In the ethical discussions about disorders of consciousness (DOCs), two abilities are usually considered as central: | that may be causing coma (i.e., brainstem, back of brain...) 
and assess the severity of the coma with the Glasgow Coma Scale Take blood work to see if drugs were involved or if it was a result of hypoventilation/hyperventilation Check for levels of “serum glucose, calcium, sodium, potassium, magnesium, phosphate, urea, and creatinine” Perform brain scans to observe any abnormal brain functioning using either CT or MRI scans Continue to monitor brain waves and identify seizures in the patient using EEGs Initial evaluation In the initial assessment of coma, it is common to gauge the level of consciousness on the AVPU (alert, vocal stimuli, painful stimuli, unresponsive) scale by noting whether the patient spontaneously exhibits actions and by assessing the patient's response to vocal and painful stimuli. More elaborate scales, such as the Glasgow Coma Scale, quantify an individual's reactions such as eye opening, movement and verbal response in order to indicate their extent of brain injury. The patient's score can vary from 3 (indicating severe brain injury or death) to 15 (indicating mild or no brain injury). In those with deep unconsciousness, there is a risk of asphyxiation as the control over the muscles in the face and throat is diminished. As a result, those presenting to a hospital with coma are typically assessed for this risk ("airway management"). If the risk of asphyxiation is deemed high, doctors may use various devices (such as an oropharyngeal airway, nasopharyngeal airway or endotracheal tube) to safeguard the airway. Imaging and testing Imaging typically encompasses a computed tomography (CAT or CT) scan of the brain or an MRI, and is performed to identify specific causes of the coma, such as hemorrhage in the brain or herniation of the brain structures. Special tests such as an EEG can also reveal a great deal about the activity level of the cortex, such as semantic processing and the presence of seizures, and are important tools not only for the assessment of cortical activity but also for predicting the likelihood of the patient's awakening. Autonomic responses such as the skin conductance response may also provide further insight into the patient's emotional processing. In the treatment of traumatic brain injury (TBI), there are four examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs, e.g. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, or vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of the long scanning times and because fractures cannot be detected as well as with CT. MRIs are used for the imaging of soft tissues and lesions in the posterior fossa which cannot be found with the use of CT. Body movements Brainstem and cortical function are assessed through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of which cranial nerves are still intact and functioning and are an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed.
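As a rough illustration of the Glasgow Coma Scale scoring mentioned above, the short Python sketch below simply sums the three standard GCS components (eye opening 1-4, verbal response 1-5, motor response 1-6) into the 3-15 total; the function and variable names are illustrative only and are not part of any clinical tool described in the text.

```python
def glasgow_coma_scale(eye, verbal, motor):
    """Sum the three GCS components into a total score (3-15).

    Standard component ranges: eye opening 1-4, verbal response 1-5,
    motor response 1-6. Names and structure here are illustrative only.
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component score out of range")
    return eye + verbal + motor

# A total of 3 corresponds to the deepest level of unresponsiveness,
# while 15 corresponds to mild or no impairment.
print(glasgow_coma_scale(eye=1, verbal=1, motor=1))  # 3
print(glasgow_coma_scale(eye=4, verbal=5, motor=6))  # 15
```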
The nerves that can still be assessed include cranial nerves 2 (CN II), 3 (CN III), 5 (CN V), 7 (CN VII), and 9 and 10 (CN IX, CN X). Assessment of posture and physique is the next step. It involves general observation of the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posture in which the patient has the arms flexed at the elbow and adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posture in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posture indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posture indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, whereas a decerebrate posture indicates that the lesion is closer to the brainstem. Pupil size Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; the following table is a technical, medical guideline for common pupil findings and their possible interpretations: Severity A coma can be classified as (1) supratentorial (above the tentorium cerebelli), (2) infratentorial (below the tentorium cerebelli), (3) metabolic, or (4) diffuse. This classification is merely dependent on the position of the original damage that caused the coma, and does not correlate with severity or the prognosis. The severity of coma impairment, however, is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear. The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent. Treatment Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation becomes a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood, and other supportive care as needed. Continued care Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery. Pneumonia is also common in coma patients due to their inability to swallow, which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs).
This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia. Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling. Caregivers Coma has a wide variety of emotional reactions from the family members of the affected patients, as well as the primary care givers taking care of the patients. Research has shown that the severity of injury causing coma was found to have no significant impact compared to how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial are possible. The focus of the patient care should be on creating an amicable relationship with the family members or dependents of a comatose patient as well as creating a rapport with the medical staff. Although there is heavy importance of a primary care taker, secondary care takers can play a supporting role to temporarily relieve the primary care taker's burden of tasks. Prognosis Comas can last from several days to, in particularly extreme cases, years. After this time, some patients gradually come out of the coma, some progress to a vegetative state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness and in some cases, may remain in vegetative state for years or even decades (the longest recorded period being 42 years). Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low. The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery, similarly, milder comas do not ensure higher chances of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods. Recovery People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and suffer from dysarthria, the inability to articulate any speech. Recovery usually occurs gradually. In the first days, patients may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses and may eventually recover full awareness. That said, some patients may never progress beyond very basic responses. There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings. A brain-damaged man, trapped in a coma-like state for six years, was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. 
The method, called deep brain stimulation (DBS) successfully roused communication, complex movement and eating ability in the 38-year-old American man who suffered a traumatic brain injury. His injuries left him in a minimally conscious state (MCS), a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack. Society and culture Research by Dr. Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Dr. Wijdicks studied 30 films (made between |
Call of Cthulhu virtual miniatures to be released on their augmented reality app Ardent Roleplay. Video Games Shadow of the Comet Shadow of the Comet (later repackaged as Call of Cthulhu: Shadow of the Comet) is an adventure game developed and released by Infogrames in 1993. The game is based on H. P. Lovecraft's Cthulhu Mythos and uses many elements from Lovecraft's The Dunwich Horror and The Shadow Over Innsmouth. A follow-up game, Prisoner of Ice, is not a direct sequel. Prisoner of Ice Prisoner of Ice (also Call of Cthulhu: Prisoner of Ice) is an adventure game developed and released by Infogrames for the PC and Macintosh computers in 1995 in America and Europe. It is based on H. P. Lovecraft's Cthulhu Mythos, particularly At the Mountains of Madness, and is a follow-up to Infogrames' earlier Shadow of the Comet. In 1997, the game was ported to the Sega Saturn and PlayStation exclusively in Japan. Dark Corners of the Earth A licensed first-person shooter adventure game by Headfirst Productions, based on Call of Cthulhu campaign Escape from Innsmouth and released by Bethesda Softworks in 2005/2006 for the PC and Xbox. The Wasted Land In April 2011, Chaosium and new developer Red Wasp Design announced a joint project to produce a mobile video game based on the Call of Cthulhu RPG, entitled Call of Cthulhu: The Wasted Land. The game was released on January 30, 2012. Cthulhu Chronicles In 2018, Metarcade produced Cthulhu Chronicles, a game for iOS with a campaign of nine mobile interactive fiction stories set in 1920s England based on Call of Cthulhu. The first five stories were released on July 10, 2018. Call of Cthulhu Call of Cthulhu is a survival horror role-playing video game developed by Cyanide and published by Focus Home Interactive for PlayStation 4, Xbox One and Windows. The game features a semi-open world environment and incorporates themes of Lovecraftian and psychological horror into a story which includes elements of investigation and stealth. It is inspired by H. P. Lovecraft's short story "The Call of Cthulhu". Reception Several reviews of various editions appeared in Space Gamer/Fantasy Gamer. In the March 1982 edition (No. 49), William A. Barton noted that there were some shortcomings resulting from an assumption by the designers that players would have access to rules from RuneQuest that were not in Call of Cthulhu, but otherwise Barton called the game "an excellent piece of work.... The worlds of H. P. Lovecraft are truly open for the fantasy gamer." In the October–November 1987 edition (No. 80), Lisa Cohen reviewed the 3rd edition, saying, ""This book can be for collectors of art, players, or anyone interested in knowledge about old time occult. It is the one reprint that is worth the money." Several reviews of various editions appeared in White Dwarf. In the August 1982 edition (Issue 32), Ian Bailey admired much about the first edition of the game; his only criticism was that the game was too "U.S. orientated and consequently any Keeper... who wants to set his game in the UK will have a lot of research to do." Bailey gave the game an above average rating of 9 out of 10, saying, "Call of Cthulhu is an excellent game and a welcome addition to the world of role-playing." In the August 1986 edition (Issue 80), Ashley Shepherd thought the inclusion of much material in the 3rd edition that had been previously published as supplementary books "makes the game incredibly good value." 
He concluded, "This package is going to keep Call of Cthulhu at the front of the fantasy game genre." Several reviews of various editions and supplements also appeared in Dragon. In the May 1982 edition (Issue 61), David Cook thought the rules were too complex for new gamers, but said, "It is a good game for experienced role-playing gamers and ambitious judges, especially if they like Lovecraft's type of story." In the August 1987 edition (Issue 124), Ken Rolston reviewed the Terror Australis supplement for the 3rd edition, which introduced an Australian setting in the 1920s. He thought that "Literate, macabre doom shambles from each page. Good reading, and a good campaign setting for COC adventures." In the October 1988 edition (Issue 138), Ken Rolston gave an overview of the 3rd edition, and placed it ahead of its competitors due to superior campaign setting, tone and atmosphere, the player characters as investigators, and the use of realistic player handouts such as authentic-looking newspaper clippings. Rolston concluded, "CoC is one of role-playing's acknowledged classics. Its various supplements over the years have maintained an exceptional level of quality; several, including Shadows of Yog-Sothoth and Masks of Nyarlathotep, deserve consideration among the greatest pinnacles of the fantasy role-playing game design." In the June 1990 edition (Issue 158), Jim Bambra liked the updated setting of the 4th edition, placing the game firmly in Lovecraft's 1920s. He also liked the number of adventures included in the 192-page rulebook: "The fourth edition contains enough adventures to keep any group happily entertained and sanity blasted." However, while he questioned whether owners of the 2nd or 3rd edition would get good value for their money — "You lack only the car-chase rules and the improved layout of the three books in one. The rest of the material has received minor editing but no substantial changes" — he strongly recommended the new edition to newcomers, saying, "If you don't already play CoC, all I can do is urge you to give it a try.... discover for yourself why it has made so many converts since its release." In the October 1992 edition (Issue 186), Rick Swan admitted that he was skeptical that the 5th edition would offer anything new, but instead found that the new edition benefited from "fresh material, judicious editing, and thorough polishes." He concluded, "Few RPGs exceed the CoC game's scope or match its skillful integration of background and game systems. And there's no game more fun." In a 1996 reader poll by Arcane magazine to determine the 50 most popular roleplaying games of all time, Call of Cthulhu was ranked 1st. Editor Paul Pettengale commented: "Call of Cthulhu is fully deserved of the title as the most popular roleplaying system ever - it's a game that doesn't age, is eminently playable, and which hangs together perfectly. The system, even though it's over ten years old, is still one of the very best you'll find in any roleplaying game. Also, there's not a referee in the land who could say they've read every Lovecraft inspired book or story going, so there's a pretty-well endless supply | to kill. The game does not use levels. CoC uses percentile dice (with results ranging from 1 to 100) to determine success or failure. Every player statistic is intended to be compatible with the notion that there is a probability of success for a particular action given what the player is capable of doing.
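As a minimal sketch of the roll-under percentile mechanic just described, the Python snippet below treats a d100 roll at or under the skill value as a success, and a roll at or under one-fifth of the skill as a special success, matching the worked example that follows (skill 75, special success on 1-15); the function name, rounding choice and output format are illustrative assumptions rather than official rules text.

```python
import random

def skill_check(skill, roll=None):
    """Resolve a roll-under percentile skill check.

    A d100 roll at or under the skill value counts as a success; a roll
    at or under one-fifth of the skill counts as a special success (an
    "impale" for combat skills). Exact rounding varies by edition, so
    integer division is used here purely as an illustration.
    """
    if roll is None:
        roll = random.randint(1, 100)  # percentile dice: 1-100
    if roll <= skill // 5:
        return f"rolled {roll}: special success"
    if roll <= skill:
        return f"rolled {roll}: success"
    return f"rolled {roll}: failure"

# The artist with Art 75 from the example: 1-15 special, 16-75 success.
print(skill_check(75, roll=12))  # special success
print(skill_check(75, roll=60))  # success
print(skill_check(75, roll=90))  # failure
```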
For example, an artist may have a 75% chance of being able to draw something (represented by having 75 in Art skill), and thus rolling 75 or less would yield a success. Rolling one-fifth or less of the skill level (1-15 in the example) would be a "special success" (or an "impale" for combat skills) and would yield some extra bonus to be determined by the keeper. For example, the artist character might draw especially well or especially fast, or catch some unapparent detail in the drawing. The players take the roles of ordinary people drawn into the realm of the mysterious: detectives, criminals, scholars, artists, war veterans, etc. Often, happenings begin innocently enough, until more and more of the workings behind the scenes are revealed. As the characters learn more of the true horrors of the world and the irrelevance of humanity, their sanity (represented by "Sanity Points", abbreviated SAN) inevitably withers away. The game includes a mechanism for determining how damaged a character's sanity is at any given point; encountering the horrific beings usually triggers a loss of SAN points. To gain the tools they need to defeat the horrors – mystic knowledge and magic – the characters may end up losing some of their sanity, though other means such as pure firepower or simply outsmarting one's opponents also exist. CoC has a reputation as a game in which it is quite common for a player character to die in gruesome circumstances or end up in a mental institution. Eventual triumph of the players is not guaranteed. History The original conception of Call of Cthulhu was Dark Worlds, a game commissioned by the publisher Chaosium but never published. Sandy Petersen contacted them regarding writing a supplement, set in Lovecraft's Dreamlands, for their popular fantasy game RuneQuest. He took over the writing of Call of Cthulhu, and the game was released in 1981. Petersen oversaw the first four editions with only minor changes to the system. Once he left, development was continued by Lynn Willis, who was credited as co-author in the fifth and sixth editions. After the death of Willis, Mike Mason became Call of Cthulhu line editor in 2013, continuing its development with Paul Fricker. Together they made more significant rules alterations than in any previous edition, culminating in the release of the 7th edition in 2014. Editions Early releases For those grounded in the RPG tradition, the very first release of Call of Cthulhu created a brand new framework for table-top gaming. Rather than the traditional format established by Dungeons & Dragons, which often involved the characters wandering through caves or tunnels and fighting different types of monsters, Sandy Petersen introduced the concept of the Onion Skin: interlocking layers of information and nested clues that lead the player characters from seemingly minor investigations into a missing person to discovering mind-numbingly awful global conspiracies to destroy the world. Unlike its predecessor games, CoC assumed that most investigators would not survive, alive or sane, and that the only safe way to deal with the vast majority of nasty things described in the rule books was to run away. A well-run CoC campaign should engender a sense of foreboding and inevitable doom in its players. The style and setting of the game, in a relatively modern time period, created an emphasis on real-life settings, character research, and thinking one's way around trouble. The first book of Call of Cthulhu adventures was Shadows of Yog-Sothoth.
In this work, the characters come upon a secret society's foul plot to destroy mankind, and pursue it first near to home and then in a series of exotic locations. This template was to be followed in many subsequent campaigns, including Fungi from Yuggoth (later known as Curse of Cthulhu and Day of the Beast), Spawn of Azathoth, and possibly the most highly acclaimed, Masks of Nyarlathotep. Shadows of Yog-Sothoth is important not only because it represents the first published addition to the boxed first edition of Call of Cthulhu, but because its format defined a new way of approaching a campaign of linked RPG scenarios involving actual clues for the would-be detectives amongst the players to follow and link in order to uncover the dastardly plots afoot. Its format has been used by every other campaign-length Call of Cthulhu publication. The standard of CoC scenarios was well received by independent reviewers. The Asylum and Other Tales, a series of stand alone articles released in 1983, rated an overall 9/10 in Issue 47 of White Dwarf magazine. The standard of the included 'clue' material varies from scenario to scenario, but reached its zenith in the original boxed versions of the Masks of Nyarlathotep and Horror on the Orient Express campaigns. Inside these one could find matchbooks and business cards apparently defaced by non-player characters, newspaper cuttings and (in the case of Orient Express) period passports to which players could attach their photographs, increasing the sense of immersion. Indeed, during the period that these supplements were produced, third party campaign publishers strove to emulate the quality of the additional materials, often offering separately-priced 'deluxe' clue packages for their campaigns. Additional milieux were provided by Chaosium with the release of Dreamlands, a boxed supplement containing additional rules needed for playing within the Lovecraft Dreamlands, a large map and a scenario booklet, and Cthulhu By Gaslight, another boxed set which moved the action from the 1920s to the 1890s. Cthulhu Now In 1987, Chaosium issued the supplement titled Cthulhu Now, a collection of rules, supplemental source materials and scenarios for playing Call of Cthulhu in the present day. This proved to be a very popular alternative milieu, so much so that much of the supplemental material is now included in the core rule book. Lovecraft Country Lovecraft Country was a line of supplements for Call of Cthulhu released in 1990. These supplements were overseen by Keith Herber and provided backgrounds and adventures set in Lovecraft's fictional towns of Arkham, Kingsport, Innsmouth, Dunwich, and their environs. The intent was to give investigators a common base, as well as to center the action on well-drawn characters with clear motivations. Terror Australis In 1987, Terror Australis: Call of Cthulhu in the Land Down Under was published. In 2018, a revised and updated version of the 1987 game was reissued, with about triple the content and two new games. It requires the Call of Cthulhu Keeper's Rulebook (7th Edition) and is usable with Pulp Cthulhu. Recent history In the years since the collapse of the Mythos collectible card game (production ceased in 1997), the release of CoC books has been very sporadic, with up to a year between releases. Chaosium struggled with near bankruptcy for many years before finally starting their upward climb again. 2005 was Chaosium's busiest year for many years, with 10 releases for the game. 
Chaosium took to marketing "monographs"—short books by individual writers with editing and layout provided out-of-house—directly to the consumer, allowing the company to gauge market response |
Critical and Democratic Theory is a quarterly peer-reviewed academic journal of critical and democratic theory and successor of Praxis International. It is currently edited by Jean L. Cohen, Cristina Lafont, and Hubertus Buchstein. Ertug Tombus is the managing | contribution, it is based at the New School in New York. Nadia Urbinati, Amy Allen, and Andreas Kalyvas are former co-editors. References External links Sociology journals Publications established in 1994 Quarterly journals Wiley-Blackwell academic journals English-language |
to the French, the island also saw active settlement by France. After the French ceded their claims to Newfoundland and the Acadian mainland to the British by the Treaty of Utrecht in 1713, the French relocated the population of Plaisance, Newfoundland, to Île Royale and the French garrison was established in the central eastern part at Sainte Anne. As the harbour at Sainte Anne experienced icing problems, it was decided to build a much larger fortification at Louisbourg to improve defences at the entrance to the Gulf of Saint Lawrence and to defend France's fishing fleet on the Grand Banks. The French also built the Louisbourg Lighthouse in 1734, the first lighthouse in Canada and one of the first in North America. In addition to Cape Breton Island, the French colony of Île Royale also included Île Saint-Jean, today called Prince Edward Island, and Les Îles-de-la-Madeleine. French and Indian War Louisbourg itself was one of the most important commercial and military centres in New France. Louisbourg was captured by New Englanders with British naval assistance in the Siege of Louisbourg (1745) and by British forces in 1758. The French population of Île Royale was deported to France after each siege. While French settlers returned to their homes in Île Royale after the Treaty of Aix-la-Chapelle was signed in 1748, the fortress was demolished after the second siege in 1758. Île Royale remained formally part of New France until it was ceded to Great Britain by the Treaty of Paris in 1763. It was then merged with the adjacent British colony of Nova Scotia (present day peninsular Nova Scotia and New Brunswick). Acadians who had been expelled from Nova Scotia and Île Royale were permitted to settle in Cape Breton beginning in 1764, and established communities in northwestern Cape Breton, near Chéticamp, and southern Cape Breton, on and near Isle Madame. Some of the first British-sanctioned settlers on the island following the Seven Years' War were Irish, although upon settlement they merged with local French communities to form a culture rich in music and tradition. From 1763 to 1784, the island was administratively part of the colony of Nova Scotia and was governed from Halifax. The first permanently settled Scottish community on Cape Breton Island was Judique, settled in 1775 by Michael Mor MacDonald. He spent his first winter using his upside-down boat for shelter, which is reflected in the architecture of the village's Community Centre. He composed a song about the area called "O 's àlainn an t-àite", or "O, Fair is the Place." American Revolution During the American Revolution, on 1 November 1776, John Paul Jones, the father of the American Navy, set sail in command of Alfred to free hundreds of American prisoners working in the area's coal mines. Although winter conditions prevented the freeing of the prisoners, the mission did result in the capture of Mellish, a vessel carrying a vital supply of winter clothing intended for John Burgoyne's troops in Canada. Major Timothy Hierlihy and his regiment on board HMS Hope worked in and protected the coal mines at Sydney Cape Breton from privateer attacks. Sydney, Cape Breton provided a vital supply of coal for Halifax throughout the war. The British began developing the mining site at Sydney Mines in 1777. On 14 May 1778, Major Hierlihy arrived at Cape Breton. While there, Hierlihy reported that he "beat off many piratical attacks, killed some and took other prisoners." 
A few years into the war, there was also a naval engagement in 1781 between French ships and a British convoy off Spanish River, Cape Breton, near present-day Sydney, Nova Scotia. French ships, fighting with the Americans, were re-coaling and defeated a British convoy. Six French and 17 British sailors were killed, with many more wounded. Colony of Cape Breton In 1784, Britain split the colony of Nova Scotia into three separate colonies: New Brunswick, Cape Breton Island, and present-day peninsular Nova Scotia, in addition to the adjacent colonies of St. John's Island (renamed Prince Edward Island in 1798) and Newfoundland. The colony of Cape Breton Island had its capital at Sydney on its namesake harbour fronting on Spanish Bay and the Cabot Strait. Its first Lieutenant-Governor was Joseph Frederick Wallet DesBarres (1784–1787) and his successor was William Macarmick (1787). A number of United Empire Loyalists emigrated to the Canadian colonies, including Cape Breton. David Mathews, the former Mayor of New York City during the American Revolution, emigrated with his family to Cape Breton in 1783. He succeeded Macarmick as head of the colony and served from 1795 to 1798. From 1799 to 1807, the military commandant was John Despard, brother of Edward. An order forbidding the granting of land in Cape Breton, issued in 1763, was removed in 1784. The mineral rights to the island were given over to the Duke of York by an order-in-council. The British government had intended that the Crown take over the operation of the mines when Cape Breton was made a colony, but this was never done, probably because of the rehabilitation cost of the mines. The mines were in a neglected state, caused by careless operations dating back at least to the time of the final fall of Louisbourg in 1758. Large-scale shipbuilding began in the 1790s, beginning with schooners for local trade, moving in the 1820s to larger brigs and brigantines, mostly built for British ship owners. Shipbuilding peaked in the 1850s, marked in 1851 by the full-rigged ship Lord Clarendon, which was the largest wooden ship ever built in Cape Breton. Merger with Nova Scotia In 1820, the colony of Cape Breton Island was merged for the second time with Nova Scotia. This development is one of the factors which led to large-scale industrial development in the Sydney Coal Field of eastern Cape Breton County. By the late 19th century, as a result of the faster shipping, expanding fishery and industrialization of the island, exchanges of people between the island of Newfoundland and Cape Breton increased, beginning a cultural exchange that continues to this day. The 1920s were some of the most violent times in Cape Breton. They were marked by several severe labour disputes. The famous murder of William Davis by strike breakers and the seizing of the New Waterford power plant by striking miners led to a major union sentiment that persists to this day in some circles. William Davis Miners' Memorial Day continues to be celebrated in coal mining towns to commemorate the deaths of miners at the hands of the coal companies. 20th century The turn of the 20th century saw Cape Breton Island at the forefront of scientific achievement with the now-famous activities launched by inventors Alexander Graham Bell and Guglielmo Marconi. Following his successful invention of the telephone and being relatively wealthy, Bell acquired land near Baddeck in 1885. He chose the land, which he named Beinn Bhreagh, largely due to its resemblance to his early surroundings in Scotland.
He established a summer estate complete with research laboratories, working with deaf people including Helen Keller, and continued to invent. Baddeck would be the site of his experiments with hydrofoil technologies as well as the Aerial Experiment Association, financed by his wife Mabel Gardiner Hubbard. These efforts resulted in the first powered flight in Canada when the AEA Silver Dart took off from the ice-covered waters of Bras d'Or Lake. Bell also built the forerunner to the iron lung and experimented with breeding sheep. Marconi's contributions to Cape Breton Island were also quite significant, as he used the island's geography to his advantage in transmitting the first North American trans-Atlantic radio message from a station constructed at Table Head in Glace Bay to a receiving station at Poldhu in Cornwall, England. Marconi's pioneering work in Cape Breton marked the beginning of modern radio technology. Marconi's station at Marconi Towers, on the outskirts of Glace Bay, became the chief communication centre for the Royal Canadian Navy in World War I through to the early years of World War II. Promotions for tourism beginning in the 1950s recognized the importance of the Scottish culture to the province, as the provincial government started encouraging the use of Gaelic once again. The establishment of funding for the Gaelic College of Celtic Arts and Crafts and formal Gaelic language courses in public schools are intended to address the near-loss of this culture to assimilation into Anglophone Canadian culture. In the 1960s, the Fortress of Louisbourg was partially reconstructed by Parks Canada, using the labour of unemployed coal miners. Since 2009, this National Historic Site of Canada has attracted an average of 90,000 visitors per year. Geography The irregularly shaped, roughly rectangular island is about 100 km wide and 150 km long. It lies in the southeastern extremity of the Gulf of St. Lawrence. Cape Breton is separated from the Nova Scotia peninsula by the very deep Strait of Canso. The island is joined to the mainland by the Canso Causeway. Cape Breton Island is composed of rocky shores, rolling farmland, glacial valleys, barren headlands, highlands, woods and plateaus. Geology The island is characterized by a number of elevations of ancient crystalline and metamorphic rock rising up from the south to the north, and contrasted with eroded lowlands. The bedrock consists of blocks that developed in different places around the globe, at different times, and were then fused together via tectonics. Cape Breton is formed from three terranes. These are fragments of the earth's crust formed on a tectonic plate and attached by accretion or suture to crust lying on another plate. Each of these has its own distinctive geologic history, which is different from that of the surrounding areas. The southern half of the island formed from the Avalon terrane, which was once a microcontinent in the Paleozoic era. It is made up of volcanic rock that formed near what is now called Africa. Most of the northern half of the island is on the Bras d'Or terrane (part of the Ganderia terrane). It contains volcanic and sedimentary rock formed off the coast of what is now South America. The third terrane is the relatively small Blair River inlier on the far northwestern tip. It contains the oldest rock in the Maritimes, formed up to 1.6 billion years ago.
These rocks, which can be seen in the Polletts Cove - Aspy Fault Wilderness Area north of Pleasant Bay, are likely part of the Canadian Shield, a large area of Precambrian igneous and metamorphic rock that forms the core of the North American continent. The Avalon and Bras d'Or terranes were pushed together about 500 million years ago when the supercontinent Gondwana was formed. The Blair River inlier was sandwiched in between the two when Laurussia was formed 450-360 million years ago, at which time the land was found in the tropics. This collision also formed the Appalachian Mountains; the associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was covered by tropical forest, which eventually formed coal deposits. Much later, the land was shaped by repeated ice ages, which left striations, till, and U-shaped valleys, and carved the Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines, but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent. Climate The warm-summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean. Ecology Lowlands There are lowland areas along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands. Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s, with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia. Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddlehead fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot. Red sandstone and white gypsum cliffs can be observed throughout this area. Bedrock is Carboniferous sedimentary with limestone, shale, and sandstone. Many fluvial remains from glaciation are found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east. Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges. Cape Breton Hills This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150-300 m above sea level, typically covered with Acadian forest. It includes North Mountain, Kelly's Mountain, and East Bay Hills.
Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals are visible in the spring, such as Dutchman's breeches and spring beauty. In ravines, shade tolerant trees like hemlock, white pine, red spruce are found. Less well-drained areas are forested with balsam fir and black spruce. Highlands and the | was formed 450-360 million years ago, at which time the land was found in the tropics. This collision, which also formed the Appalachian Mountains. Associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was tropical forest, which later formed coal deposits. Much later, the land was shaped by repeated ice ages, which left striations, till, U-shaped valleys, and carved the Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines, but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent. Climate The warm summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean. Ecology Lowlands There are lowland areas in along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands. Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack, and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia. Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddle head fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot. Red sandstone and white gypsum cliffs can be observed throughout this area. Bedrock is Carboniferous sedimentary with limestone, shale, and sandstone. Many fluvial remains from are glaciation found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east. Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges. 
Cape Breton Hills This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150-300 m above sea level, typically covered with Acadian forest. It includes North Mountain, Kelly's Mountain, and East Bay Hills. Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals such as Dutchman's breeches and spring beauty are visible in the spring. In ravines, shade-tolerant trees such as hemlock, white pine, and red spruce are found. Less well-drained areas are forested with balsam fir and black spruce. Highlands and the Northern Plateau The Highlands comprise a tableland in the northern portions of Inverness and Victoria counties. An extension of the Appalachian mountain chain, the plateau averages 350 metres in elevation at its edges and rises to more than 500 metres at the centre. The area has broad, gently rolling hills bisected by deep valleys and steep-walled canyons. A majority of the land is a taiga of balsam fir, with some white birch, white spruce, mountain ash, and heart-leaf birch. The northern and western edges of the plateau, particularly at high elevations, resemble arctic tundra. Trees 30–90 cm high, overgrown with reindeer lichens, can be 150 years old. At very high elevations some areas are exposed bedrock without any vegetation apart from Cladonia lichens. There are many barrens, or heaths, dominated by bushy species of the Ericaceae family. Spruce, killed by spruce budworm in the late 1970s, has reestablished at lower elevations, but not at higher elevations due to moose browsing. Decomposition is slow, leaving thick layers of plant litter. Ground cover includes wood aster, twinflower, liverworts, wood sorrel, bluebead lily, goldthread, various ferns, and lily-of-the-valley, with bryophytes and large-leaved goldenrod at higher elevations. The understory can include striped maple, mountain ash, ferns, and mountain maple. Near water, bog birch, alder, and mountain-ash are found. There are many open wetlands populated with stunted tamarack and black spruce. Poor drainage has led to the formation of peatlands which can support tufted clubrush, Bartram's serviceberry, coastal sedge, and bakeapple. Cape Breton Coastal The eastern shore is unique in that, while not at a high elevation, it has a cool climate with much rain and fog, strong winds, and low summer temperatures. It is dominated by a boreal forest of black spruce and balsam fir. Sheltered areas support tolerant hardwoods such as white birch and red maple. Many salt marshes, fens, and bogs are found there. There are many beaches on the highly crenulated coastline. Unlike elsewhere on the island, these are rocky and support plants unlike those of sandy beaches. The coast provides habitat for common coast bird species like common eider, black-legged kittiwake, black guillemot, whimbrel, and great cormorant. Hydrology Land is drained into the Gulf of Saint Lawrence via the rivers Aspy, Sydney, Mira, Framboise, Margaree, and Chéticamp. The largest freshwater lake is Lake Ainslie.
Government Local government on the island is provided by the Cape Breton Regional Municipality, the Municipality of the County of Inverness, the Municipality of the County of Richmond, and the Municipality of the County of Victoria, along with the Town of Port Hawkesbury. The island has five Miꞌkmaq Indian reserves: Eskasoni (the largest in population and land area), Membertou, Wagmatcook, Waycobah, and Potlotek. Demographics The island's residents can be grouped into five main cultures: Scottish, Mi'kmaq, Acadian, Irish, English, with respective languages Scottish Gaelic, Mi'kmaq, French, and English. English is now the primary language, including a locally distinctive Cape Breton accent, while Mi'kmaq, Scottish Gaelic and Acadian French are still spoken in some communities. Later migrations of Black Loyalists, Italians, and Eastern Europeans mostly settled in the island's eastern part around the industrial Cape Breton region. Cape Breton Island's population has been in decline two decades with an increasing exodus in recent years due to economic conditions. According to the Census of Canada, the population of Cape Breton [Economic region] in 2016 / 2011 / 2006 / 1996 was 132,010 / 135,974 / 142,298 / 158,260. Religious groups Statistics Canada in 2001 reported a "religion" total of 145,525 for Cape Breton, including 5,245 with "no religious affiliation." Major categories included: Roman Catholic : 96,260 (includes Eastern Catholic, Polish National Catholic Church, Old Catholic) Protestant: 42,390 Christian, not included elsewhere: 580 Orthodox: 395 Jewish: 250 Muslim: 145 Economy Much of the recent economic history of Cape Breton Island can be tied to the coal industry. The island has two major coal deposits: the Sydney Coal Field in the southeastern part of the island along the Atlantic Ocean drove the Industrial Cape Breton economy throughout the 19th and 20th centuries—until after World War II, its industries were the largest private employers in Canada. the Inverness Coal Field in the western part of the island along the Gulf of St. Lawrence is significantly smaller but hosted several mines. Sydney has traditionally been the main port, with facilities in a large, sheltered, natural harbour. It is the island's largest commercial centre and home to the Cape Breton Post daily newspaper, as well as one television station, CJCB-TV (CTV), and several radio stations. The Marine Atlantic terminal at North Sydney is the terminal for large ferries traveling to Channel-Port aux Basques and seasonally to Argentia, both on the island of Newfoundland. Point Edward on the west side of Sydney Harbour is the location of Sydport, a former navy base () now converted to commercial use. The Canadian Coast Guard College is nearby at Westmount. Petroleum, bulk coal, and cruise ship facilities are also in Sydney Harbour. Glace Bay, the second largest urban community in population, was the island's main coal mining centre until its last mine closed in the 1980s. Glace Bay was the hub of the Sydney & Louisburg Railway and a major fishing port. At one time, Glace Bay was known as the largest town in Nova Scotia, based on population. Port Hawkesbury has risen to prominence since the completion of the Canso Causeway and Canso Canal created an artificial deep-water port, allowing extensive petrochemical, pulp and paper, and gypsum handling facilities to be established. 
The Strait of Canso is completely navigable to Seawaymax vessels, and Port Hawkesbury is open to the deepest-draught vessels on the world's oceans. Large marine vessels may also enter Bras d'Or Lake through the Great Bras d'Or channel, and small craft can use the Little Bras d'Or channel or St. Peters Canal. While commercial shipping no longer uses the St. Peters Canal, it remains an important waterway for recreational vessels. The industrial Cape Breton area faced several challenges with the closure of the Cape Breton Development Corporation's (DEVCO) coal mines and the Sydney Steel Corporation's (SYSCO) steel mill. In recent years, the Island's residents have tried to diversify the area economy by investing in tourism developments, call centres, and small businesses, as well as manufacturing ventures in fields such as auto parts, pharmaceuticals, and window glazings. While the Cape Breton Regional Municipality is in transition from an industrial to a service-based economy, the rest of Cape Breton Island outside the industrial area surrounding Sydney-Glace Bay has been more stable, with a mixture of fishing, forestry, small-scale agriculture, and tourism. Tourism in particular has grown throughout the post-Second World War era, especially the growth in vehicle-based touring, which was furthered by the creation of the Cabot Trail scenic drive. The scenery of the island is rivalled in northeastern North America by only Newfoundland; and Cape Breton Island tourism marketing places a heavy emphasis on its Scottish Gaelic heritage through events such as the Celtic Colours Festival, held each October, as well as promotions through the Gaelic College of Celtic Arts and Crafts. Whale-watching is a popular attraction for tourists. Whale-watching cruises are operated by vendors from Baddeck to Chéticamp. The most popular species of whale found in Cape Breton's waters is the pilot whale. The Cabot Trail is a scenic road circuit around and over the Cape Breton Highlands with spectacular coastal vistas; over 400,000 visitors drive the Cabot Trail each summer and fall. Coupled with the Fortress of Louisbourg, it has driven the growth of the tourism industry on the island in recent decades. The Condé Nast travel guide has rated Cape Breton Island as one of the world's best island destinations. Transport The island's primary east–west road is Highway 105, the Trans-Canada Highway, although Trunk 4 is also heavily used. Highway 125 is an important arterial route around Sydney Harbour in the Cape Breton Regional Municipality. The Cabot Trail, circling the Cape Breton Highlands, and Trunk 19, along the island's western coast, are important secondary roads. The Cape Breton and Central Nova Scotia Railway maintains railway connections between the port of Sydney to the Canadian National Railway in Truro. Cape Breton Island is served by several airports, the largest, the JA Douglas McCurdy Sydney Airport, situated on Trunk 4 between the communities of Sydney and Glace Bay, as well as smaller airports at Port Hawksbury, Margaree, and Baddeck. Culture Language Gaelic speakers in Cape Breton, as elsewhere in Nova Scotia, constituted a large proportion of the local population from the 18th century on. They brought with them a common culture of poetry, traditional songs and tales, music and dance, and used this to develop distinctive local traditions. 
Most Gaelic settlement in Nova Scotia happened between 1770 and 1840, with probably over 50,000 Gaelic speakers emigrating from the Scottish Highlands and the Hebrides to Nova Scotia and Prince Edward Island. Such emigration was facilitated by changes in Gaelic society and the economy, with sharp increases in rents, confiscation of land and disruption of local customs and rights. In Nova Scotia, poetry and song in Gaelic flourished. George Emmerson argues that an "ancient and rich" tradition of storytelling, song, and Gaelic poetry emerged during the 18th century and was transplanted from the Highlands of Scotland to Nova Scotia, where the language similarly took root. The majority of those settling in Nova Scotia from the end of the 18th century through to the middle of the next were from the Scottish Highlands, rather than the Lowlands, making the Highland tradition's impact on the region more profound. Gaelic settlement in Cape Breton began in earnest in the early nineteenth century. The Gaelic language became dominant from Colchester County in the west of Nova Scotia into Cape Breton County in the east. It was reinforced in Cape Breton in the first half of the 19th century with an influx of Highland Scots numbering approximately 50,000 as a result of the Highland Clearances. Gaelic speakers, however, tended to be poor; they were largely illiterate and had little access to education. This situation persisted into the early days of the twentieth century. In 1921 Gaelic was approved as an optional subject in the curriculum of Nova Scotia, but few teachers could be found and children were discouraged from using the language in schools. By 1931 the number of Gaelic speakers in Nova Scotia had fallen to approximately 25,000, mostly in discrete pockets. In Cape Breton it was still a majority language, but the proportion was falling. Children were no longer being raised with Gaelic. From 1939 on, attempts were made to strengthen its position in the public school system in Nova Scotia, but funding, official commitment and the availability of teachers continued to be a problem. By |
be divided into categories and identified three distinct themes: the "Dunsanian" (written in a style similar to that of Lord Dunsany), "Arkham" (occurring in Lovecraft's fictionalized New England setting), and "Cthulhu" (the cosmic tales) cycles. Writer Will Murray noted that while Lovecraft often used his fictional pantheon in the stories he ghostwrote for other authors, he reserved Arkham and its environs exclusively for those tales he wrote under his own name. Although the Mythos was not formalized or acknowledged between them, Lovecraft did correspond and share story elements with other contemporary writers including Clark Ashton Smith, Robert E. Howard, Robert Bloch, Frank Belknap Long, Henry Kuttner, Henry S. Whitehead, and Fritz Leiber, a group referred to as the "Lovecraft Circle." For example, Robert E. Howard's character Friedrich Von Junzt reads Lovecraft's Necronomicon in the short story "The Children of the Night" (1931), and in turn Lovecraft mentions Howard's Unaussprechlichen Kulten in the stories "Out of the Aeons" (1935) and "The Shadow Out of Time" (1936). Many of Howard's original unedited Conan stories also involve parts of the Cthulhu Mythos. Second stage Price denotes the second stage's commencement with August Derleth, with the principal difference between Lovecraft and Derleth being Derleth's use of hope and development of the idea that the Cthulhu mythos essentially represented a struggle between good and evil. Derleth is credited with creating the "Elder Gods". Price said the basis for Derleth's system is found in Lovecraft: "Was Derleth's use of the rubric 'Elder Gods' so alien to Lovecraft's in At the Mountains of Madness? Perhaps not. In fact, this very story, along with some hints from 'The Shadow over Innsmouth', provides the key to the origin of the 'Derleth Mythos'. For in At the Mountains of Madness is shown the history of a conflict between interstellar races, first among them the Elder Ones and the Cthulhu-spawn." Derleth said Lovecraft wished for other authors to actively write about the Mythos as opposed to it being a discrete plot device within Lovecraft's own stories. Derleth expanded the boundaries of the Mythos by including any passing reference to another author's story elements by Lovecraft as part of the genre. Just as Lovecraft made passing reference to Clark Ashton Smith's Book of Eibon, Derleth in turn added Smith's Ubbo-Sathla to the Mythos. Derleth also attempted to connect the deities of the Mythos to the four elements ("air", "earth", "fire", and "water"), creating new beings representative of certain elements in order to legitimize his system of classification. Derleth created "Cthugha" as a sort of fire elemental when a fan, Francis Towner Laney, complained that he had neglected to include the element in his schema. Laney, the editor of The Acolyte, had categorized the Mythos in | literary successors. The name "Cthulhu" derives from the central creature in Lovecraft's seminal short story, "The Call of Cthulhu", first published in the pulp magazine Weird Tales in 1928. Richard L. Tierney, a writer who also wrote Mythos tales, later applied the term "Derleth Mythos" to distinguish Lovecraft's works from Derleth's later stories, which modify key tenets of the Mythos. Authors of Lovecraftian horror in particular frequently use elements of the Cthulhu Mythos. History In his essay "H. P. Lovecraft and the Cthulhu Mythos", Robert M. Price described two stages in the development of the Cthulhu Mythos.
Price called the first stage the "Cthulhu Mythos proper." This stage was formulated during Lovecraft's lifetime and was subject to his guidance. The second stage was guided by August Derleth who, in addition to publishing Lovecraft's stories after his death, attempted to categorize and expand the Mythos. First stage An ongoing theme in Lovecraft's work is the complete irrelevance of mankind in the face of the cosmic horrors that apparently exist in the universe. Lovecraft made frequent references to the "Great Old Ones", a loose pantheon of ancient, powerful deities from space who once ruled the Earth and have since fallen into a deathlike sleep. While these monstrous deities were present in almost all of Lovecraft's published work (his second short story "Dagon", published in 1919, is considered the start of the mythos), the first story to really expand the pantheon of Great Old Ones and its themes is "The Call of Cthulhu", which was published in 1928. Lovecraft broke with other pulp writers of the time by having his main characters' minds deteriorate when afforded a glimpse of what exists outside their perceived reality. He emphasized the point by stating in the opening sentence of the story that "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents." Writer Dirk W. Mosig noted that Lovecraft was a "mechanistic materialist" who embraced the philosophy of cosmic indifferentism and believed in a purposeless, mechanical, and uncaring universe. Human beings, |
the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups. Technique The major supplier of cranes in the cinema of the United States throughout the 1940s, 1950s, and 1960s was the Chapman Company (later Chapman-Leonard of North Hollywood), supplanted by dozens of similar manufacturers around the world. The traditional design provided seats for both the director and the camera operator, and sometimes a third seat for the cinematographer as well. Large weights on the back of the crane compensate for the weight of the people riding the crane and must be adjusted carefully to avoid the possibility of accidents. During the 1960s, the tallest crane was the Chapman Titan crane, a massive design over 20 feet high that won an Academy Scientific & Engineering award. Most such cranes were manually operated, requiring an experienced boom operator who knew how to vertically raise, lower, and "crab" the camera alongside actors while the crane platform rolled on separate tracks. The crane operator and camera operator had to precisely coordinate their moves so that focus, pan, and camera position all started and stopped at the same time, requiring great skill and rehearsal. Types Camera cranes may be small, medium, or large, depending on the load capacity and the length of the boom. Historically, the first camera cranes lifted the camera together with the operator, and sometimes an assistant. The range of motion of the boom was restricted because of the high load capacity and the need to ensure operator safety. In recent years, remotely controlled camera cranes have become popular. Such a crane carries on the boom only a movie or television camera without an operator and allows shooting from difficult positions, as the small load capacity makes it possible to achieve a long reach of the crane boom and relative freedom of movement. The operator controls the camera from the ground through a motorized panoramic head, using a remote control and watching the image on a video monitor. A separate category consists of telescopic camera cranes. These devices allow setting an arbitrary trajectory of the camera, eliminating the radial displacement characteristic of traditional jib-crane shots. Large camera cranes are almost indistinguishable from the usual boom-type cranes, with the exception of special equipment for moving the boom smoothly and controlling noise. Small camera cranes and crane-trucks have a lightweight construction, often without a mechanical drive. The boom is moved by hand and balanced against a counterweight matched to the load, which makes it easy to manipulate. To improve usability and repeatability of the crane's movement in different takes, the boom's axes of rotation are fitted with graduated scales and a pointer. In some cases, the camera crane is mounted on a dolly for even greater camera | a half-circle pan shot from a crane for the 1935 Nazi propaganda film Triumph of the Will. A crane shot was used in Orson Welles' 1941 film Citizen Kane. The Western High Noon had a famous crane shot. The shot backs up and rises, in order to show Marshal Will Kane totally alone and isolated on the street. Director Dario Argento included an extensive scene in Tenebrae where the camera seemingly crawled over the walls and up a house wall, all in one seamless take.
Due to its length, the tracking shot ended up being the production's most difficult and complex part to complete. The 1980 comedy-drama film The Stunt Man featured a crane throughout the production of the fictitious film-within-a-film, directed by eccentric director Peter O'Toole. The television comedy Second City Television (SCTV) uses the concept of the crane shot as comedic material. After using a crane shot in one of the first NBC-produced episodes, the network complained about the exorbitant cost of renting the crane. SCTV writers responded by making the "crane shot" a ubiquitous symbol of production excess while also lampooning network executives who care nothing about artistic vision and everything about the bottom line. At the end of the second season, an inebriated Johnny LaRue (John Candy) is given his very own crane by Santa Claus, implying he would be able to have a crane shot whenever he wanted it. In his film Sympathy for the Devil, Jean-Luc Godard used a crane for almost every shot in the movie, giving each scene a 360 degree tour of the tableau Godard presented to the viewer. In the final scene he even shows the crane he was able to rent on his limited budget by including it in the scene. This was one of his traits as a filmmaker—showing off his budget—as he did with Brigitte Bardot in Le Mepris (Contempt). Director Dennis Dugan frequently uses top-to-bottom crane shots in his comedy films. Orson Welles used a crane camera during the iconic opening of Touch of Evil. The camera perched on a Chapman crane begins on a close-up of a ticking time bomb and ends three-plus minutes later with a blinding explosion. The closing take of Richard Attenborough's film version of Oh! What a Lovely War begins with a single war grave, gradually pulling back to reveal hundreds of identical crosses. The 2004 Johnnie To film Breaking News opens with an elaborate seven-minute single-take crane shot. The 1964 film by Mikhail Kalatozov, I Am Cuba contains two of the most astonishing tracking shots ever attempted. References |
the boat to France for the Olympics, Liddell discovers the heats for his 100-metre race will be on a Sunday. Despite intense pressure from the Prince of Wales and the British Olympic Committee, he refuses to run the race because his Christian convictions prevent him from running on the Lord's Day. A solution is found thanks to Liddell's teammate Lindsay, who, having already won a silver medal in the 400 metres hurdles, offers to give his place in the 400-metre race on the following Thursday to Liddell, who gratefully accepts. Liddell's religious convictions in the face of national athletic pride make headlines around the world; he delivers a sermon at the Paris Church of Scotland that Sunday, and quotes from Isaiah 40, ending with "But they that wait upon the Lord shall renew their strength; they shall mount up with wings as eagles; they shall run, and not be weary; and they shall walk, and not faint." Abrahams is badly beaten by the heavily favoured United States runners in the 200 metre race. He knows his last chance for a medal will be the 100 metres. He competes in the race and wins. His coach Mussabini, who was barred from the stadium, is overcome that the years of dedication and training have paid off with an Olympic gold medal. Now Abrahams can get on with his life and reunite with his girlfriend Sybil, whom he had neglected for the sake of running. Before Liddell's race, the American coach remarks dismissively to his runners that Liddell has little chance of doing well in his now, far longer, 400 metre race. But one of the American runners, Jackson Scholz, hands Liddell a note of support, quoting 1 Samuel 2:30 "He that honors Me I will honor." Liddell defeats the American favourites and wins the gold medal. The British team returns home triumphant. A textual epilogue reveals that Abrahams married Sybil and became the elder statesman of British athletics while Liddell went on to do missionary work, with all of Scotland mourning his death in 1945 in Japanese-occupied China. Cast Other actors in smaller roles include John Young as Eric and Jennie's father Reverend J.D. Liddell, Yvonne Gilan as their mother Mary, Benny Young as their younger brother Rob, Yves Beneyton as French runner Géo André, Philip O'Brien as American coach George Collins, Patrick Doyle as Jimmie, and Ruby Wax as Bunty. Kenneth Branagh, who worked as a set gofer, appears as an extra in the Cambridge Society Day sequence. Stephen Fry has a likewise uncredited role as a Gilbert-and-Sullivan Club singer. Production Screenplay Producer David Puttnam was looking for a story in the mold of A Man for All Seasons (1966), regarding someone who follows his conscience, and felt sports provided clear situations in this sense. He discovered Eric Liddell's story by accident in 1977, when he happened upon a reference book on the Olympics while housebound from the flu in a rented house in Los Angeles. Screenwriter Colin Welland, commissioned by Puttnam, did an enormous amount of research for his Academy Award-winning script. Among other things, he took out advertisements in London newspapers seeking memories of the 1924 Olympics, went to the National Film Archives for pictures and footage of the 1924 Olympics, and interviewed everyone involved who was still alive. Welland just missed Abrahams, who died on 14 January 1978, but he did attend Abrahams' February 1978 memorial service, which inspired the present-day framing device of the film. 
Aubrey Montague's son saw Welland's newspaper ad and sent him copies of the letters his father had sent home – which gave Welland something to use as a narrative bridge in the film. Except for changes in the greetings of the letters from "Darling Mummy" to "Dear Mum" and the change from Oxford to Cambridge, all of the readings from Montague's letters are from the originals. Welland's original script also featured, in addition to Eric Liddell and Harold Abrahams, a third protagonist, 1924 Olympic gold medallist Douglas Lowe, who was presented as a privileged aristocratic athlete. However, Lowe refused to have anything to do with the film, and his character was written out and replaced by the fictional character of Lord Andrew Lindsay. Initial financing towards development costs was provided by Goldcrest Films, who then sold the project to Allied, but kept a percentage of the profits. Ian Charleson wrote Eric Liddell's speech to the post-race workingmen's crowd at the Scotland v. Ireland races. Charleson, who had studied the Bible intensively in preparation for the role, told director Hugh Hudson that he didn't feel the portentous and sanctimonious scripted speech was either authentic or inspiring. Hudson and Welland allowed him to write words he personally found inspirational instead. Puttnam chose Hugh Hudson, a multiple award-winning advertising and documentary filmmaker who had never helmed a feature film, to direct Chariots of Fire. Hudson and Puttnam had known each other since the 1960s, when Puttnam was an advertising executive and Hudson was making films for ad agencies. In 1977, Hudson had also been second-unit director on the Puttnam-produced film Midnight Express. Casting Director Hugh Hudson was determined to cast young, unknown actors in all the major roles of the film, and to back them up by using veterans like John Gielgud, Lindsay Anderson, and Ian Holm as their supporting cast. Hudson and producer David Puttnam did months of fruitless searching for the perfect actor to play Eric Liddell. They then saw Scottish stage actor Ian Charleson performing the role of Pierre in the Royal Shakespeare Company's production of Piaf, and knew immediately they had found their man. Unbeknownst to them, Charleson had heard about the film from his father, and desperately wanted to play the part, feeling it would "fit like a kid glove". Ben Cross, who plays Harold Abrahams, was discovered while playing Billy Flynn in Chicago. In addition to having a natural pugnaciousness, he had the desired ability to sing and play the piano. Cross was thrilled to be cast, and said he was moved to tears by the film's script. 20th Century Fox, which put up half of the production budget in exchange for distribution rights outside of North America, insisted on having a couple of notable American names in the cast. Thus the small parts of the two American champion runners, Jackson Scholz and Charlie Paddock, were cast with recent headliners: Brad Davis had recently starred in Midnight Express (also produced by Puttnam), and Dennis Christopher had recently starred, as a young bicycle racer, in the popular indie film Breaking Away. All of the actors portraying runners underwent a gruelling three-month training intensive with renowned running coach Tom McNab. This training and isolation of the actors also created a strong bond and sense of camaraderie among them. 
Filming The beach scenes showing the athletes running towards the Carlton Hotel at Broadstairs, Kent, were shot in Scotland on West Sands, St Andrews next to the 18th hole of the Old Course at St Andrews Links. A plaque now commemorates the filming. The lasting impact of these iconic scenes (as the athletes run in slow motion to Vangelis's music) prompted Broadstairs town council to commemorate them with their own seafront plaque. All of the Cambridge scenes were actually filmed at Hugh Hudson's alma mater Eton College, because Cambridge refused filming rights, fearing depictions of anti-Semitism. The Cambridge administration greatly regretted the decision after the film's enormous success. Liverpool Town Hall was the setting for the scenes depicting the British Embassy in Paris. The Colombes Olympic Stadium in Paris was represented by the Oval Sports Centre, Bebington, Merseyside. The nearby Woodside ferry terminal was used to represent the embarkation scenes set in Dover. The railway station scenes were filmed in York, using locomotives from the National Railway Museum. The scene depicting a performance of The Mikado was filmed in the Royal Court Theatre, Liverpool, with members of the D'Oyly Carte Opera Company who were on tour. Editing The film was slightly altered for the U.S. audience. A brief scene depicting a pre-Olympics cricket game between Abrahams, Liddell, Montague, and the rest of the British track team appears shortly after the beginning of the original film. For the American audience, this brief scene was deleted. In the U.S., to avoid the initial G rating, which had been strongly associated with children's films and might have hindered box office sales, a different scene was used – one depicting Abrahams and Montague arriving at a Cambridge railway station and encountering two First World War veterans who use an obscenity – in order to be given a PG rating. Soundtrack Although the film is a period piece, set in the 1920s, the Academy Award-winning original soundtrack composed by Vangelis uses a modern 1980s electronic sound, with a strong use of synthesizer and piano among other instruments. This was a departure from earlier period films, which employed sweeping orchestral instrumentals. The title theme of the film has been used in subsequent films and television shows during slow-motion segments. Vangelis, a Greek-born electronic composer who moved to Paris in the late 1960s, had been living in London since 1974. Director Hugh Hudson had collaborated with him on documentaries and commercials, and was also particularly impressed with his 1979 albums Opera Sauvage and China. David Puttnam also greatly admired Vangelis's body of work, having originally selected his compositions for his previous film Midnight Express. Hudson made the choice for Vangelis and for a modern score: "I knew we needed a piece which was anachronistic to the period to give it a feel of modernity. It was a risky idea but we went with it rather than have a period symphonic score." The soundtrack had a personal significance to Vangelis: After composing the iconic theme tune he told Puttnam, "My father is a runner, and this is an anthem to him." Hudson originally wanted Vangelis's 1977 tune "L'Enfant", from his Opera Sauvage album, to be the title theme of the film, and the beach running sequence was actually filmed with "L'Enfant" playing on loudspeakers for the runners to pace to. 
Vangelis finally convinced Hudson he could create a new and better piece for the film's main theme – and when he played the now-iconic "Chariots of Fire" theme for Hudson, it was agreed the new tune was unquestionably better. The "L'Enfant" melody still made it into the film: when the athletes reach Paris and enter the stadium, a brass band marches through the field, and first plays a modified, acoustic performance of the piece. Vangelis's electronic "L'Enfant" track eventually was used prominently in the 1982 film The Year of Living Dangerously. Some pieces of Vangelis's music in the film did not end up on the film's soundtrack album. One of them is the background music to the race Eric Liddell runs in the Scottish highlands. This piece is a version of "Hymne", the original version of which appears on Vangelis's 1979 album, Opéra sauvage. Various versions are also included on Vangelis's compilation albums Themes, Portraits, and Odyssey: The Definitive Collection, though none of these include the version used in the film. Five lively Gilbert and Sullivan tunes also appear in the soundtrack, and serve as jaunty period music which counterpoints Vangelis's modern electronic score. These are: "He is an Englishman" from H.M.S. Pinafore, "Three Little Maids from School Are We" from The Mikado, "With Catlike Tread" from The Pirates of Penzance, "The Soldiers of Our Queen" from Patience, and "There Lived a King" from The Gondoliers. The film also incorporates a major traditional work: "Jerusalem", sung by a British choir at the 1978 funeral of Harold Abrahams. The words, written by William Blake in 1804–08, were set to music by Parry in 1916 as a celebration of England. This hymn has been described as "England's unofficial national anthem", concludes the film and inspired its title. A handful of other traditional anthems and hymns and period-appropriate instrumental ballroom-dance music round out the film's soundtrack. Reception Since its release, Chariots of Fire has received generally positive reviews from critics. , the film holds an 82% "Certified Fresh" rating on the review aggregator website Rotten Tomatoes, based on 73 reviews, with a weighted average of 7.53/10. The site's consensus reads: "Decidedly slower and less limber than the Olympic runners at the center of its story, the film nevertheless manages to make effectively stirring use of its spiritual and patriotic themes." On Metacritic, the film has a score of 78 out of 100 based on 19 critics' reviews, indicating "generally favorable reviews". For its 2012 re-release, Kate Muir of The Times gave the film five stars, writing: "In a time when drug tests and synthetic fibres have replaced gumption and moral fibre, the tale of two runners competing against each other in the 1924 Olympics has a simple, undiminished power. From the opening scene of pale young men racing barefoot along the beach, full of hope and elation, backed by Vangelis's now famous anthem, the film is utterly compelling." The film was the highest-grossing British film for the year with theatrical rentals of £1,859,480. Accolades The film was nominated for seven Academy Awards, winning four (including Best Picture). When accepting his Oscar for Best Original Screenplay, Colin Welland famously announced "The British are coming". It was the first film released by Warner Bros. to win Best Picture since My Fair Lady in 1964. 
American Film Institute recognition 1998: AFI's 100 Years...100 Movies - Nominated 2005: AFI's 100 Years of Film Scores - Nominated 2006: AFI's 100 Years...100 Cheers - No. 100 2007: AFI's 100 Years...100 Movies (10th Anniversary Edition) - Nominated 2008: AFI's 10 Top 10 - Nominated Sports Movie Other honours BFI Top 100 British films (1999) – rank 19 Hot 100 No. 1 Hits of 1982 (USA) (8 May) – Vangelis, Chariots of Fire theme Historical accuracy Chariots of Fire is a film about achieving victory through self sacrifice and moral courage. | Mummy" to "Dear Mum" and the change from Oxford to Cambridge, all of the readings from Montague's letters are from the originals. Welland's original script also featured, in addition to Eric Liddell and Harold Abrahams, a third protagonist, 1924 Olympic gold medallist Douglas Lowe, who was presented as a privileged aristocratic athlete. However, Lowe refused to have anything to do with the film, and his character was written out and replaced by the fictional character of Lord Andrew Lindsay. Initial financing towards development costs was provided by Goldcrest Films, who then sold the project to Allied, but kept a percentage of the profits. Ian Charleson wrote Eric Liddell's speech to the post-race workingmen's crowd at the Scotland v. Ireland races. Charleson, who had studied the Bible intensively in preparation for the role, told director Hugh Hudson that he didn't feel the portentous and sanctimonious scripted speech was either authentic or inspiring. Hudson and Welland allowed him to write words he personally found inspirational instead. Puttnam chose Hugh Hudson, a multiple award-winning advertising and documentary filmmaker who had never helmed a feature film, to direct Chariots of Fire. Hudson and Puttnam had known each other since the 1960s, when Puttnam was an advertising executive and Hudson was making films for ad agencies. In 1977, Hudson had also been second-unit director on the Puttnam-produced film Midnight Express. Casting Director Hugh Hudson was determined to cast young, unknown actors in all the major roles of the film, and to back them up by using veterans like John Gielgud, Lindsay Anderson, and Ian Holm as their supporting cast. Hudson and producer David Puttnam did months of fruitless searching for the perfect actor to play Eric Liddell. They then saw Scottish stage actor Ian Charleson performing the role of Pierre in the Royal Shakespeare Company's production of Piaf, and knew immediately they had found their man. Unbeknownst to them, Charleson had heard about the film from his father, and desperately wanted to play the part, feeling it would "fit like a kid glove". Ben Cross, who plays Harold Abrahams, was discovered while playing Billy Flynn in Chicago. In addition to having a natural pugnaciousness, he had the desired ability to sing and play the piano. Cross was thrilled to be cast, and said he was moved to tears by the film's script. 20th Century Fox, which put up half of the production budget in exchange for distribution rights outside of North America, insisted on having a couple of notable American names in the cast. Thus the small parts of the two American champion runners, Jackson Scholz and Charlie Paddock, were cast with recent headliners: Brad Davis had recently starred in Midnight Express (also produced by Puttnam), and Dennis Christopher had recently starred, as a young bicycle racer, in the popular indie film Breaking Away. 
All of the actors portraying runners underwent a gruelling three-month training intensive with renowned running coach Tom McNab. This training and isolation of the actors also created a strong bond and sense of camaraderie among them. Filming The beach scenes showing the athletes running towards the Carlton Hotel at Broadstairs, Kent, were shot in Scotland on West Sands, St Andrews next to the 18th hole of the Old Course at St Andrews Links. A plaque now commemorates the filming. The lasting impact of these iconic scenes (as the athletes run in slow motion to Vangelis's music) prompted Broadstairs town council to commemorate them with their own seafront plaque. All of the Cambridge scenes were actually filmed at Hugh Hudson's alma mater Eton College, because Cambridge refused filming rights, fearing depictions of anti-Semitism. The Cambridge administration greatly regretted the decision after the film's enormous success. Liverpool Town Hall was the setting for the scenes depicting the British Embassy in Paris. The Colombes Olympic Stadium in Paris was represented by the Oval Sports Centre, Bebington, Merseyside. The nearby Woodside ferry terminal was used to represent the embarkation scenes set in Dover. The railway station scenes were filmed in York, using locomotives from the National Railway Museum. The scene depicting a performance of The Mikado was filmed in the Royal Court Theatre, Liverpool, with members of the D'Oyly Carte Opera Company who were on tour. Editing The film was slightly altered for the U.S. audience. A brief scene depicting a pre-Olympics cricket game between Abrahams, Liddell, Montague, and the rest of the British track team appears shortly after the beginning of the original film. For the American audience, this brief scene was deleted. In the U.S., to avoid the initial G rating, which had been strongly associated with children's films and might have hindered box office sales, a different scene was used – one depicting Abrahams and Montague arriving at a Cambridge railway station and encountering two First World War veterans who use an obscenity – in order to be given a PG rating. Soundtrack Although the film is a period piece, set in the 1920s, the Academy Award-winning original soundtrack composed by Vangelis uses a modern 1980s electronic sound, with a strong use of synthesizer and piano among other instruments. This was a departure from earlier period films, which employed sweeping orchestral instrumentals. The title theme of the film has been used in subsequent films and television shows during slow-motion segments. Vangelis, a Greek-born electronic composer who moved to Paris in the late 1960s, had been living in London since 1974. Director Hugh Hudson had collaborated with him on documentaries and commercials, and was also particularly impressed with his 1979 albums Opera Sauvage and China. David Puttnam also greatly admired Vangelis's body of work, having originally selected his compositions for his previous film Midnight Express. Hudson made the choice for Vangelis and for a modern score: "I knew we needed a piece which was anachronistic to the period to give it a feel of modernity. It was a risky idea but we went with it rather than have a period symphonic score." The soundtrack had a personal significance to Vangelis: After composing the iconic theme tune he told Puttnam, "My father is a runner, and this is an anthem to him." 
Hudson originally wanted Vangelis's 1977 tune "L'Enfant", from his Opera Sauvage album, to be the title theme of the film, and the beach running sequence was actually filmed with "L'Enfant" playing on loudspeakers for the runners to pace to. Vangelis finally convinced Hudson he could create a new and better piece for the film's main theme – and when he played the now-iconic "Chariots of Fire" theme for Hudson, it was agreed the new tune was unquestionably better. The "L'Enfant" melody still made it into the film: when the athletes reach Paris and enter the stadium, a brass band marches through the field, and first plays a modified, acoustic performance of the piece. Vangelis's electronic "L'Enfant" track eventually was used prominently in the 1982 film The Year of Living Dangerously. Some pieces of Vangelis's music in the film did not end up on the film's soundtrack album. One of them is the background music to the race Eric Liddell runs in the Scottish highlands. This piece is a version of "Hymne", the original version of which appears on Vangelis's 1979 album, Opéra sauvage. Various versions are also included on Vangelis's compilation albums Themes, Portraits, and Odyssey: The Definitive Collection, though none of these include the version used in the film. Five lively Gilbert and Sullivan tunes also appear in the soundtrack, and serve as jaunty period music which counterpoints Vangelis's modern electronic score. These are: "He is an Englishman" from H.M.S. Pinafore, "Three Little Maids from School Are We" from The Mikado, "With Catlike Tread" from The Pirates of Penzance, "The Soldiers of Our Queen" from Patience, and "There Lived a King" from The Gondoliers. The film also incorporates a major traditional work: "Jerusalem", sung by a British choir at the 1978 funeral of Harold Abrahams. The words, written by William Blake in 1804–08, were set to music by Parry in 1916 as a celebration of England. This hymn has been described as "England's unofficial national anthem", concludes the film and inspired its title. A handful of other traditional anthems and hymns and period-appropriate instrumental ballroom-dance music round out the film's soundtrack. Reception Since its release, Chariots of Fire has received generally positive reviews from critics. , the film holds an 82% "Certified Fresh" rating on the review aggregator website Rotten Tomatoes, based on 73 reviews, with a weighted average of 7.53/10. The site's consensus reads: "Decidedly slower and less limber than the Olympic runners at the center of its story, the film nevertheless manages to make effectively stirring use of its spiritual and patriotic themes." On Metacritic, the film has a score of 78 out of 100 based on 19 critics' reviews, indicating "generally favorable reviews". For its 2012 re-release, Kate Muir of The Times gave the film five stars, writing: "In a time when drug tests and synthetic fibres have replaced gumption and moral fibre, the tale of two runners competing against each other in the 1924 Olympics has a simple, undiminished power. From the opening scene of pale young men racing barefoot along the beach, full of hope and elation, backed by Vangelis's now famous anthem, the film is utterly compelling." The film was the highest-grossing British film for the year with theatrical rentals of £1,859,480. Accolades The film was nominated for seven Academy Awards, winning four (including Best Picture). When accepting his Oscar for Best Original Screenplay, Colin Welland famously announced "The British are coming". 
It was the first film released by Warner Bros. to win Best Picture since My Fair Lady in 1964. American Film Institute recognition 1998: AFI's 100 Years...100 Movies - Nominated 2005: AFI's 100 Years of Film Scores - Nominated 2006: AFI's 100 Years...100 Cheers - No. 100 2007: AFI's 100 Years...100 Movies (10th Anniversary Edition) - Nominated 2008: AFI's 10 Top 10 - Nominated Sports Movie Other honours BFI Top 100 British films (1999) – rank 19 Hot 100 No. 1 Hits of 1982 (USA) (8 May) – Vangelis, Chariots of Fire theme Historical accuracy Chariots of Fire is a film about achieving victory through self sacrifice and moral courage. While the producers' intent was to make a cinematic work that was historically authentic, the film was not intended to be historically accurate. Numerous liberties were taken with the actual historical chronology, the inclusion and exclusion of notable people, and the creation of fictional scenes for dramatic purpose, plot pacing and exposition. Characters The film depicts Abrahams as attending Gonville and Caius College, Cambridge with three other Olympic athletes: Henry Stallard, Aubrey Montague, and Lord Andrew Lindsay. Abrahams and Stallard were in fact students there and competed in the 1924 Olympics. Montague also competed in the Olympics as depicted, but he attended Oxford, not Cambridge. Aubrey Montague sent daily letters to his mother about his time at Oxford and the Olympics; these letters were the basis of Montague's narration in the film. The character of Lindsay was based partially on Lord Burghley, a significant figure in the history of British athletics. Although Burghley did attend Cambridge, he was not a contemporary of Harold Abrahams, as Abrahams was an undergraduate from 1919 to 1923 and Burghley was at Cambridge from 1923 to 1927. One scene in the film depicts the Burghley-based "Lindsay" as practising hurdles on his estate with full champagne glasses placed on each hurdle – this was something the wealthy Burghley did, although he used matchboxes instead of champagne glasses. The fictional character of Lindsay was created when Douglas Lowe, who was Britain's third athletics gold medallist in the 1924 Olympics, was not willing to be involved with the film. Another scene in the film recreates the Great Court Run, |
back and examine the dilemma as a whole. In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level. This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes. The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer. Motive consequentialism Another consequentialist version is motive consequentialism, which looks at whether the state of affairs that results from the motive to choose an action is better or at least as good as each of the alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act can therefore not be wrong if the decision to act was based on a right motive. A possible inference is, that one can not be blamed for mistaken judgments if the motivation was to do good. Negative consequentialism Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences. One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism will require active intervention to prevent bad and ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences. Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure." (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious). Acts and omissions Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia. Actualism and possibilism The normative status of an action depends on its consequences according to consequentialism. The consequences of the actions of an agent may include other actions by this agent. Actualism and possibilism disagree on how later possible actions impact the normative status of the current action by the same agent. 
Actualists assert that it is only relevant what the agent would actually do later for assessing the value of an alternative. Possibilists, on the other hand, hold that we should also take into account what the agent could do, even if she wouldn't do it. For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do. One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character. For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she wouldn't have done the work anyway, even if she had accepted the request. By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome. Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent. On his view, it is a requirement that the agent has rational control over the event in question. For example, eating only one cookie and stopping afterward only is an option for Gifre if she has the rational capacity to repress her temptation to continue eating. If the temptation is irrepressible then this course of action is not considered to be an option and is therefore not relevant when assessing what the best alternative is. Portmore suggests that, given this adjustment, we should prefer a view very closely associated with possibilism called maximalism. Issues Action guidance One important characteristic of many normative moral theories such as consequentialism is the ability to produce practical moral judgements. At the very least, any moral theory needs to define the standpoint from which the goodness of the consequences are to be determined. What is primarily at stake here is the responsibility of the agent. The ideal observer One common tactic among consequentialists, particularly those committed to an altruistic (selfless) account of consequentialism, is to employ an ideal, neutral observer from which moral judgements can be made. John Rawls, a critic of utilitarianism, argues that utilitarianism, in common with other forms of consequentialism, relies on the perspective of such an ideal observer. 
The particular characteristics of this ideal observer can vary from an omniscient observer, who would grasp all the consequences of any action, to an ideally informed observer, who knows as much as could reasonably be expected, but not necessarily all the circumstances or all the possible consequences. Consequentialist theories that adopt this paradigm hold that right action is the action that will bring about the best consequences from this ideal observer's perspective. The real observer In practice, it is very difficult, and at times arguably impossible, to adopt the point of view of an ideal observer. Individual moral agents do not know everything about their particular situations, and thus do not know all the possible consequences of their potential actions. For this reason, some theorists have argued that consequentialist theories can only require agents to choose the best action in line with what they know about the situation. However, if this approach is naïvely adopted, then moral agents who, for example, recklessly fail to reflect on their situation, and act in a way that brings about terrible results, could be said to be acting in a morally justifiable way. Acting in a situation without first informing oneself of the circumstances of the situation can lead to even the most well-intended actions yielding miserable consequences. As a result, it could be argued that there is a moral imperative for an agent to inform himself as much as possible about a situation before judging the appropriate course of action. This imperative, of course, is derived from consequential thinking: a better-informed agent is able to bring about better consequences. Consequences for whom Moral action always has consequences for certain people or things. Varieties of consequentialism can be differentiated by the beneficiary of the good consequences. That is, one might ask "Consequences for whom?" Agent-focused or agent-neutral A fundamental distinction can be drawn between theories which require that agents act for ends perhaps disconnected from their own interests and drives, and theories which permit that agents act for ends in which they have some personal interest or | exists in the forms of rule utilitarianism and rule egoism. Various theorists are split as to whether the rules are the only determinant of moral behavior or not. For example, Robert Nozick held that a certain set of minimal rules, which he calls "side-constraints," are necessary to ensure appropriate actions. There are also differences as to how absolute these moral rules are. Thus, while Nozick's side-constraints are absolute restrictions on behavior, Amartya Sen proposes a theory that recognizes the importance of certain rules, but these rules are not absolute. That is, they may be violated if strict adherence to the rule would lead to much more undesirable consequences. One of the most common objections to rule-consequentialism is that it is incoherent, because it is based on the consequentialist principle that what we should be concerned with is maximizing the good, but then it tells us not to act to maximize the good, but to follow rules (even in cases where we know that breaking the rule could produce better results). In Ideal Code, Real World, Brad Hooker avoids this objection by not basing his form of rule-consequentialism on the ideal of maximizing the good. He writes: [T]he best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. 
The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties. Derek Parfit described Hooker's book as the "best statement and defence, so far, of one of the most important moral theories." State consequentialism State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the welfare of a state. According to the Stanford Encyclopedia of Philosophy, Mohist consequentialism, dating back to the 5th century BCE, is the "world's earliest form of consequentialism, a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare." Unlike utilitarianism, which views utility as the sole moral good, "the basic goods in Mohist consequentialist thinking are...order, material wealth, and increase in population." During the time of Mozi, war and famine were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs, like shelter and clothing; and "order" refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability. In The Cambridge History of Ancient China, Stanford sinologist David Shepherd Nivison writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth...if people have plenty, they would be good, filial, kind, and so on unproblematically." The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Jeremy Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. The importance of outcomes that are good for the community outweigh the importance of individual pleasure and pain. The term state consequentialism has also been applied to the political philosophy of the Confucian philosopher Xunzi. On the other hand, "legalist" Han Fei "is motivated almost totally from the ruler's point of view." Ethical egoism Ethical egoism can be understood as a consequentialist theory according to which the consequences for the individual agent are taken to matter more than any other result. Thus, egoism will prescribe actions that may be beneficial, detrimental, or neutral to the welfare of others. Some, like Henry Sidgwick, argue that a certain degree of egoism promotes the general welfare of society for two reasons: because individuals know how to please themselves best, and because if everyone were an austere altruist then general welfare would inevitably decrease. Ethical altruism Ethical altruism can be seen as a consequentialist theory which prescribes that an individual take actions that have the best consequences for everyone except for himself. This was advocated by Auguste Comte, who coined the term altruism, and whose ethics can be summed up in the phrase "Live for others." Two-level consequentialism The two-level approach involves engaging in critical reasoning and considering all the possible ramifications of one's actions before making an ethical decision, but reverting to generally reliable moral rules when one is not in a position to stand back and examine the dilemma as a whole. 
In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level. This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes. The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer. Motive consequentialism Another consequentialist version is motive consequentialism, which looks at whether the state of affairs that results from the motive to choose an action is better or at least as good as each of the alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act can therefore not be wrong if the decision to act was based on a right motive. A possible inference is, that one can not be blamed for mistaken judgments if the motivation was to do good. Negative consequentialism Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences. One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism will require active intervention to prevent bad and ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences. Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure." (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious). Acts and omissions Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia. Actualism and possibilism The normative status of an action depends on its consequences according to consequentialism. The consequences of the actions of an agent may include other actions by this agent. Actualism and possibilism disagree on how later possible actions impact the normative status of the current action by the same agent. 
Actualists assert that it is only relevant what the agent would actually do later for assessing the value of an alternative. Possibilists, on the other hand, hold that we should also take into account what the agent could do, even if she wouldn't do it. For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do. One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character. For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she wouldn't have done the work anyway, even if she had accepted the request. By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome. Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent. On his view, it is a requirement that the agent has rational control over the event in |
has been no official conscription in the country. Also the National Assembly has repeatedly rejected to reintroduce it due to popular resentment. However, in November 2006, it was reintroduced. Although mandatory for all males between the ages of 18 and 30 (with some sources stating up to age 35), less than 20% of those in the age group are recruited amidst a downsizing of the armed forces. China Universal conscription in China dates back to the State of Qin, which eventually became the Qin Empire of 221 BC. Following unification, historical records show that a total of 300,000 conscript soldiers and 500,000 conscript labourers constructed the Great Wall of China. In the following dynasties, universal conscription was abolished and reintroduced on numerous occasions. , universal military conscription is theoretically mandatory in the People's Republic of China, and reinforced by law. However, due to the large population of China and large pool of candidates available for recruitment, the People's Liberation Army has always had sufficient volunteers, so conscription has not been required in practice at all. Cyprus Military service in Cyprus has a deep rooted history entangled with the Cyprus problem. Military service in the Cypriot National Guard is mandatory for all male citizens of the Republic of Cyprus, as well as any male non-citizens born of a parent of Greek Cypriot descent, lasting from the January 1 of the year in which they turn 18 years of age to December 31, of the year in which they turn 50. (Efthymiou, 2016). All male residents of Cyprus who are of military age (16 and over) are required to obtain an exit visa from the Ministry of Defense. Currently, military conscription in Cyprus lasts 14 months. Denmark Conscription is known in Denmark since the Viking Age, where one man out of every 10 had to serve the king. Frederick IV of Denmark changed the law in 1710 to every 4th man. The men were chosen by the landowner and it was seen as a penalty. Since 12 February 1849, every physically fit man must do military service. According to §81 in the Constitution of Denmark, which was promulgated in 1849: Every male person able to carry arms shall be liable with his person to contribute to the defence of his country under such rules as are laid down by Statute. — Constitution of DenmarkThe legislation about compulsory military service is articulated in the Danish Law of Conscription. National service takes 4–12 months. It is possible to postpone the duty when one is still in full-time education. Every male turning 18 will be drafted to the 'Day of Defence', where they will be introduced to the Danish military and their health will be tested. Physically unfit persons are not required to do military service. It is only compulsory for men, while women are free to choose to join the Danish army. Almost all of the men have been volunteers in recent years, 96.9% of the total number of recruits having been volunteers in the 2015 draft. After lottery, one can become a conscientious objector. Total objection (refusal from alternative civilian service) results in up to 4 months jailtime according to the law. However, in 2014 a Danish man, who signed up for the service and objected later, got only 14 days of home arrest. In many countries the act of desertion (objection after signing up) is punished harder than objecting the compulsory service. 
Finland Conscription in Finland is part of a general compulsion for national military service for all adult males (; ) defined in the 127§ of the Constitution of Finland. Conscription can take the form of military or of civilian service. According to Finnish Defence Forces 2011 data slightly under 80% of Finnish males turned 30 had entered and finished the military service. The number of female volunteers to annually enter armed service had stabilised at approximately 300. The service period is 165, 255 or 347 days for the rank and file conscripts and 347 days for conscripts trained as NCOs or reserve officers. The length of civilian service is always twelve months. Those electing to serve unarmed in duties where unarmed service is possible serve either nine or twelve months, depending on their training. Any Finnish male citizen who refuses to perform both military and civilian service faces a penalty of 173 days in prison, minus any served days. Such sentences are usually served fully in prison, with no parole. Jehovah's Witnesses are no longer exempted from service as of February 27, 2019. The inhabitants of demilitarized Åland are exempt from military service. By the Conscription Act of 1951, they are, however, required to serve a time at a local institution, like the coast guard. However, until such service has been arranged, they are freed from service obligation. The non-military service of Åland has not been arranged since the introduction of the act, and there are no plans to institute it. The inhabitants of Åland can also volunteer for military service on the mainland. As of 1995, women are permitted to serve on a voluntary basis and pursue careers in the military after their initial voluntary military service. The military service takes place in Finnish Defence Forces or in the Finnish Border Guard. All services of the Finnish Defence Forces train conscripts. However, the Border Guard trains conscripts only in land-based units, not in coast guard detachments or in the Border Guard Air Wing. Civilian service may take place in the Civilian Service Center in Lapinjärvi or in an accepted non-profit organization of educational, social or medical nature. Germany Between 1956 and 2011 conscription was mandatory for all male citizens in the German federal armed forces (German: Bundeswehr), as well as for the Federal Border Guard (German: Bundesgrenzschutz) in the 1970s (see Border Guard Service). With the end of the Cold War the German government drastically reduced the size of its armed forces. The low demand for conscripts led to the suspension of compulsory conscription in 2011. Since then, only volunteer professionals serve in the Bundeswehr. Greece Since 1914 Greece has been enforcing mandatory military service, currently lasting 12 months (but historically up to 36 months) for all adult men. Citizens discharged from active service are normally placed in the reserve and are subject to periodic recalls of 1–10 days at irregular intervals. Universal conscription was introduced in Greece during the military reforms of 1909, although various forms of selective conscription had been in place earlier. In more recent years, conscription was associated with the state of general mobilisation declared on July 20, 1974 due to the crisis in Cyprus (the mobilisation was formally ended on December 18, 2002). 
The duration of military service has historically ranged between 12 and 36 months depending on various factors either particular to the conscript or the political situation in the Eastern Mediterranean. Although women are employed by the Greek army as officers and soldiers, they are not obliged to enlist. Soldiers receive no health insurance, but they are provided with medical support during their army service, including hospitalization costs. Greece enforces conscription for all male citizens aged between 19 and 45. In August 2009, duration of the mandatory service was reduced from 12 months as it was before to 9 months for the army, but remained at 12 months for the navy and the air force. The number of conscripts allocated to the latter two has been greatly reduced aiming at full professionalization. Nevertheless, mandatory military service at the army was once again raised to 12 months in March 2021, unless served in units in Evros or the North Aegean islands where duration was kept at 9 months. Although full professionalization is under consideration, severe financial difficulties and mismanagement, including delays and reduced rates in the hiring of professional soldiers, as well as widespread abuse of the deferment process, has resulted in the postponement of such a plan. Israel There is a mandatory military service for all men and women in Israel who are fit and 18 years old. Men must serve 30 months while women serve 24 months, with the vast majority of conscripts being Jewish. Some Israeli citizens are exempt from mandatory service: Non-Jewish Arab citizens permanent residents (non-civilian) such as the Druze of the Golan Heights Male Ultra-Orthodox Jews can apply for deferment to study in Yeshiva and the deferment tends to become an exemption, although some do opt to serve in the military Female religious Jews, as long as they declare they are unable to serve due to religious grounds. Most of whom opt for the alternative of volunteering in the national service Sherut Leumi All of the exempt above are eligible to volunteer to the Israel Defense Forces (IDF), as long as they declare so. Male Druze and male Circassian Israeli citizens are liable for conscription, in accordance with agreement set by their community leaders (their community leaders however signed a clause in which all female Druze and female Circassian are exempt from service). A few male Bedouin Israeli citizens choose to enlist to the Israeli military in every draft (despite their Muslim-Arab background that exempt them from conscription). South Korea Lithuania Lithuania abolished its conscription in 2008. In May 2015, the Lithuanian parliament voted to reintroduce conscription and the conscripts started their training in August 2015. From 2015 to 2017 there were enough volunteers to avoid drafting civilians. Luxembourg Luxembourg practiced military conscription from 1948 until 1967. Moldova Moldova, which currently has male conscription, has announced plans to abolish the practice. Moldova's Defense Ministry announced that a plan which stipulates the gradual elimination of military conscription will be implemented starting from the autumn of 2018. Netherlands Conscription, which was called "Service Duty" () in the Netherlands, was first employed in 1810 by French occupying forces. Napoleon's brother Louis Bonaparte, who was King of Holland from 1806 to 1810, had tried to introduce conscription a few years earlier, unsuccessfully. Every man aged 20 years or older had to enlist. 
By means of drawing lots it was decided who had to undertake service in the French army. It was possible to arrange a substitute against payment. Later on, conscription was used for all men over the age of 18. Postponement was possible, due to study, for example. Conscientious objectors could perform an alternative civilian service instead of military service. For various reasons, this forced military service was criticized at the end of the twentieth century. Since the Cold War was over, so was the direct threat of a war. Instead, the Dutch army was employed in more and more peacekeeping operations. The complexity and danger of these missions made the use of conscripts controversial. Furthermore, the conscription system was thought to be unfair as only men were drafted. In the European part of Netherlands, compulsory attendance has been officially suspended since 1 May 1997. Between 1991 and 1996, the Dutch armed forces phased out their conscript personnel and converted to an all-professional force. The last conscript troops were inducted in 1995, and demobilized in 1996. The suspension means that citizens are no longer forced to serve in the armed forces, as long as it is not required for the safety of the country. Since then, the Dutch army has become an all-professional force. However, to this day, every male and – from January 2020 onward – female citizen aged 17 gets a letter in which they are told that they have been registered but do not have to present themselves for service. Norway Conscription was constitutionally established the 12 apr 1907 with Kongeriket Norges Grunnlov § 119.. , Norway currently employs a weak form of mandatory military service for men and women. In practice recruits are not forced to serve, instead only those who are motivated are selected. About 60,000 Norwegians are available for conscription every year, but only 8,000 to 10,000 are conscripted. Since 1985, women have been able to enlist for voluntary service as regular recruits. On 14 June 2013 the Norwegian | en masse), was devised during the French Revolution, to enable the Republic to defend itself from the attacks of European monarchies. Deputy Jean-Baptiste Jourdan gave its name to the 5 September 1798 Act, whose first article stated: "Any Frenchman is a soldier and owes himself to the defense of the nation." It enabled the creation of the Grande Armée, what Napoleon Bonaparte called "the nation in arms", which overwhelmed European professional armies that often numbered only into the low tens of thousands. More than 2.6 million men were inducted into the French military in this way between the years 1800 and 1813. The defeat of the Prussian Army in particular shocked the Prussian establishment, which had believed it was invincible after the victories of Frederick the Great. The Prussians were used to relying on superior organization and tactical factors such as order of battle to focus superior troops against inferior ones. Given approximately equivalent forces, as was generally the case with professional armies, these factors showed considerable importance. However, they became considerably less important when the Prussian armies faced Napoleon's forces that outnumbered their own in some cases by more than ten to one. Scharnhorst advocated adopting the levée en masse, the military conscription used by France. The Krümpersystem was the beginning of short-term compulsory service in Prussia, as opposed to the long-term conscription previously used. 
In the Russian Empire, the military service time "owed" by serfs was 25 years at the beginning of the 19th century. In 1834 it was decreased to 20 years. The recruits were to be not younger than 17 and not older than 35. In 1874 Russia introduced universal conscription in the modern pattern, an innovation only made possible by the abolition of serfdom in 1861. New military law decreed that all male Russian subjects, when they reached the age of 20, were eligible to serve in the military for six years. In the decades prior to World War I universal conscription along broadly Prussian lines became the norm for European armies, and those modeled on them. By 1914 the only substantial armies still completely dependent on voluntary enlistment were those of Britain and the United States. Some colonial powers such as France reserved their conscript armies for home service while maintaining professional units for overseas duties. World Wars The range of eligible ages for conscripting was expanded to meet national demand during the World Wars. In the United States, the Selective Service System drafted men for World War I initially in an age range from 21 to 30 but expanded its eligibility in 1918 to an age range of 18 to 45. In the case of a widespread mobilization of forces where service includes homefront defense, ages of conscripts may range much higher, with the oldest conscripts serving in roles requiring lesser mobility. Expanded-age conscription was common during the Second World War: in Britain, it was commonly known as "call-up" and extended to age 51. Nazi Germany termed it Volkssturm ("People's Storm") and included children as young as 16 and men as old as 60. During the Second World War, both Britain and the Soviet Union conscripted women. The United States was on the verge of drafting women into the Nurse Corps because it anticipated it would need the extra personnel for its planned invasion of Japan. However, the Japanese surrendered and the idea was abandoned. Arguments against conscription Sexism Men's rights activists, feminists, and opponents of discrimination against men have criticized military conscription, or compulsory military service, as sexist. The National Coalition for Men, a men's rights group, sued the US Selective Service System in 2019, leading to it being declared unconstitutional by a US Federal Judge. The federal district judge's opinion was unanimously overturned by the U.S. Court of Appeals for the 5th Circuit. In September 2021, the House of Representatives passed the annual Defence Authorization Act, which included an amendment that states that "all Americans between the ages of 18 and 25 must register for selective service." This struck off the word "Male" which extends a potential draft to women; the bill passed the Senate with bipartisan support. The measure will go in effect one year after enactment of the new law if it survives. Feminists have argued that military conscription is sexist because wars serve the interests of what they view as the patriarchy, the military is a sexist institution, conscripts are therefore indoctrinated in sexism, and conscription of men normalizes violence by men as socially acceptable. Feminists have been organizers and participants in resistance to conscription in several countries. Conscription has also been criticized as, historically, only men have been subjected to conscription. 
Men who opt out or are deemed unfit for military service must often perform alternative service, such as Zivildienst in Austria and Switzerland, or pay extra taxes, whereas women do not have these obligations. Men who do not sign up for Selective Service in the US, are prohibited from eligibility for citizenship, financial aid, admissions to public colleges or universities, federal grants and loans, federal employment, and in some states, driving licenses. Involuntary servitude American libertarians oppose conscription and call for the abolition of the Selective Service System, believing that impressment of individuals into the armed forces is involuntary servitude. Ron Paul, a former presidential nominee of the U.S. Libertarian Party has said that conscription "is wrongly associated with patriotism, when it really represents slavery and involuntary servitude". The philosopher Ayn Rand opposed conscription, suggesting that "of all the statist violations of individual rights in a mixed economy, the military draft is the worst. It is an abrogation of rights. It negates man's fundamental right—the right to life—and establishes the fundamental principle of statism: that a man's life belongs to the state, and the state may claim it by compelling him to sacrifice it in battle." In 1917, a number of radicals and anarchists, including Emma Goldman, challenged the new draft law in federal court arguing that it was a direct violation of the Thirteenth Amendment's prohibition against slavery and involuntary servitude. However, the Supreme Court unanimously upheld the constitutionality of the draft act in the case of Arver v. United States on 7 January 1918. The decision said the Constitution gave Congress the power to declare war and to raise and support armies. The Court emphasized the principle of the reciprocal rights and duties of citizens: "It may not be doubted that the very conception of a just government in its duty to the citizen includes the reciprocal obligation of the citizen to render military service in case of need and the right to compel." Economic It can be argued that in a cost-to-benefit ratio, conscription during peacetime is not worthwhile. Months or years of service performed by the most fit and capable subtract from the productivity of the economy; add to this the cost of training them, and in some countries paying them. Compared to these extensive costs, some would argue there is very little benefit; if there ever was a war then conscription and basic training could be completed quickly, and in any case there is little threat of a war in most countries with conscription. In the United States, every male resident is required by law to register with the Selective Service System within 30 days following his 18th birthday and be available for a draft; this is often accomplished automatically by a motor vehicle department during licensing or by voter registration. According to Milton Friedman the cost of conscription can be related to the parable of the broken window in anti-draft arguments. The cost of the work, military service, does not disappear even if no salary is paid. The work effort of the conscripts is effectively wasted, as an unwilling workforce is extremely inefficient. The impact is especially severe in wartime, when civilian professionals are forced to fight as amateur soldiers. Not only is the work effort of the conscripts wasted and productivity lost, but professionally skilled conscripts are also difficult to replace in the civilian workforce. 
Every soldier conscripted in the army is taken away from his civilian work, and away from contributing to the economy which funds the military. This may be less a problem in an agrarian or pre-industrialized state where the level of education is generally low, and where a worker is easily replaced by another. However, this is potentially more costly in a post-industrial society where educational levels are high and where the workforce is sophisticated and a replacement for a conscripted specialist is difficult to find. Even direr economic consequences result if the professional conscripted as an amateur soldier is killed or maimed for life; his work effort and productivity are lost. Arguments for conscription Political and moral motives Jean Jacques Rousseau argued vehemently against professional armies since he believed that it was the right and privilege of every citizen to participate to the defense of the whole society and that it was a mark of moral decline to leave the business to professionals. He based his belief upon the development of the Roman Republic, which came to an end at the same time as the Roman Army changed from a conscript to a professional force. Similarly, Aristotle linked the division of armed service among the populace intimately with the political order of the state. Niccolò Machiavelli argued strongly for conscription and saw the professional armies, made up of mercenary units, as the cause of the failure of societal unity in Italy. Other proponents, such as William James, consider both mandatory military and national service as ways of instilling maturity in young adults. Some proponents, such as Jonathan Alter and Mickey Kaus, support a draft in order to reinforce social equality, create social consciousness, break down class divisions and allow young adults to immerse themselves in public enterprise. Charles Rangel called for the reinstatement of the draft during the Iraq War not because he seriously expected it to be adopted but to stress how the socioeconomic restratification meant that very few children of upper-class Americans served in the all-volunteer American armed forces. Economic and resource efficiency It is estimated by the British military that in a professional military, a company deployed for active duty in peacekeeping corresponds to three inactive companies at home. Salaries for each are paid from the military budget. In contrast, volunteers from a trained reserve are in their civilian jobs when they are not deployed. It was more financially beneficial for less-educated young Portuguese men born in 1967 to participate in conscription than to participate in the highly-competitive job market with men of the same age who continued to higher education. Drafting of women Throughout history, women have only been conscripted to join armed forces in a few countries, in contrast to the universal practice of conscription from among the male population. The traditional view has been that military service is a test of manhood and a rite of passage from boyhood into manhood. In recent years, this position has been challenged on the basis that it violates gender equality, and some countries, especially in Europe, have extended conscription obligations to women. Nations that in present-day actively draft women into military service are Bolivia, Chad, Eritrea, Israel, Mozambique, Norway, North Korea and Sweden. 
Finland introduced voluntary female conscription in 1995, giving women between the ages of 18-29 an option to complete their military service alongside men. Norway introduced female conscription in 2015, making it the first NATO member to have a legally compulsory national service for both men and women. In practice only motivated volunteers are selected to join the army in Norway. Sweden introduced female conscription in 2010, but it was not activated until 2017. This made Sweden the second nation in Europe to draft women, and the second in the world to draft women on the same formal terms as men. Israel has universal female conscription, although in practice women can avoid service by claiming a religious exemption and over a third of Israeli women do so. Sudanese law allows for conscription of women, but this is not implemented in practice. In the United Kingdom during World War II, beginning in 1941, women were brought into the scope of conscription but, as all women with dependent children were exempt and many women were informally left in occupations such as nursing or teaching, the number conscripted was relatively few. In the USSR, there was never conscription of women for the armed forces, but the severe disruption of normal life and the high proportion of civilians affected by World War II after the German invasion attracted many volunteers for "The Great Patriotic War". Medical doctors of both sexes could and would be conscripted (as officers). Also, the Soviet university education system required Department of Chemistry students of both sexes to complete an ROTC course in NBC defense, and such female reservist officers could be conscripted in times of war. The United States came close to drafting women into the Nurse Corps in preparation for a planned invasion of Japan. In 1981 in the United States, several men filed lawsuit in the case Rostker v. Goldberg, alleging that the Selective Service Act of 1948 violates the Due Process Clause of the Fifth Amendment by requiring that only men register with the Selective Service System (SSS). The Supreme Court eventually upheld the Act, stating that "the argument for registering women was based on considerations of equity, but Congress was entitled, in the exercise of its constitutional powers, to focus on the question of military need, rather than 'equity.'" In 2013 Judge Gray H. Miller of the United States District Court for the Southern District of Texas ruled that the Service's men-only requirement was unconstitutional, as while at the time Rostker was decided, women were banned from serving in combat, the situation had since changed with the 2013 and 2015 restriction removals. Miller's opinion was reversed by the Fifth Circuit, stating that only the Supreme Court could overturn the Supreme Court precedence from Rostker. The Supreme Court considered but declined to review the Fifth Circuit's ruling in June 2021. In an opinion authored by Justice Sonia Sotomayor and joined by Justices Stephen Breyer and Brett Kavanaugh, the three justices agreed that the male-only draft was likely unconstitutional given the changes in the military's stance on the roles, but because Congress had been reviewing and evaluating legislation to eliminate its male-only draft requirement via the National Commission on Military, National, and Public Service (NCMNPS) since 2016, it would have been inappropriate for the Court to act at that time. 
On October 1, 1999, in Taiwan, the Judicial Yuan of the Republic of China in its Interpretation 490 considered that the physical differences between males and females and the derived role differentiation in their respective social functions and lives would not make drafting only males a violation of the Constitution of the Republic of China. Though women are not conscripted in Taiwan, transsexual persons are exempt. In 2018 the Netherlands started including women in its draft registration system, although conscription is not currently enforced for either sex. Conscientious objection A conscientious objector is an individual whose personal beliefs are incompatible with military service, or, more often, with any role in the armed forces. In some countries, conscientious objectors have special legal status, which modifies their conscription duties. For example, Sweden allows conscientious objectors to choose service in the weapons-free civil defense. The reasons for refusing to serve in the military are varied. Some people are conscientious objectors for religious reasons. In particular, the members of the historic peace churches are pacifist by doctrine, and Jehovah's Witnesses, while not strictly pacifists, refuse to participate in the armed forces on the ground that they believe that Christians should be neutral in international conflicts. By country Austria Every male citizen of the Republic of Austria from the age of 17 up to 50 (specialists up to 65 years) is liable to military service. However, apart from mobilization, call-ups to the six-month basic military training in the Bundesheer can only be made up to the age of 35. For men refusing to undergo this training, |
and engineering from the University of Massachusetts Amherst in 1991. She was advised by Professor Thomas J. McCarthy on her doctorate. As an undergraduate, she was a member of the intercollegiate rowing crew and was a resident of Baker House. Military career Coleman continued to pursue her PhD at the University of Massachusetts Amherst as a second lieutenant. In 1988, she entered active duty at Wright-Patterson Air Force Base as a research chemist. During her work, she participated as a surface analysis consultant on the NASA Long Duration Exposure Facility experiment. In 1991, she received her doctorate in polymer science and engineering. She retired from the Air Force in November 2009 as a colonel. NASA career Coleman was selected by NASA in 1992 to join the NASA Astronaut Corps. In 1995, she was a member of the STS-73 crew on the scientific mission USML-2 with experiments including biotechnology, combustion science, and the physics of fluids. During the flight, she reported to Houston Mission Control that she had spotted an Unidentified flying object (UFO). She also trained for the mission STS-83 to be the backup for Donald A. Thomas; however, as he recovered on time, she did not fly that mission. STS-93 was Coleman's second space flight in 1999. She was mission specialist in charge of deploying the Chandra X-ray Observatory and its Inertial Upper Stage out of the shuttle's cargo bay. Coleman served as Chief of Robotics for the Astronaut Office, to include robotic arm operations and training for all Space Shuttle and International Space Station missions. In October 2004, Coleman served as an aquanaut during the NEEMO 7 mission aboard the Aquarius underwater laboratory, living and working underwater for eleven days. Coleman was assigned as a backup U.S. crew member for Expeditions 19, 20 and 21 and served as a backup crew member for Expeditions 24 and 25 as part of her training for Expedition 26. Coleman launched on December 15, 2010 (December 16, 2010 Baikonur time), aboard Soyuz TMA-20 to join the Expedition 26 mission aboard the International Space Station. She retired from NASA on December 1, 2016. Spaceflight experience STS-73 on Space Shuttle Columbia (October 20 to November 5, 1995) was the second United States Microgravity Laboratory (USML-2) mission. The mission focused on materials science, biotechnology, combustion science, the physics of fluids, and numerous scientific experiments housed in the pressurized Spacelab module. In completing her first space flight, Coleman orbited the Earth 256 times, traveled over 6 million miles, and logged a total of 15 days, 21 hours, 52 minutes and 21 seconds in space. STS-93 on Columbia (July 22 to 27, 1999) was a five-day mission during which Coleman was the lead mission specialist for the deployment of the Chandra X-ray Observatory. Designed to conduct comprehensive studies of the universe, the telescope will enable scientists to study exotic phenomena such as exploding stars, quasars, and black holes. Mission duration was 118 hours and 50 minutes. Soyuz TMA-20 / Expedition 26/27 (December 15, 2010, to | its Inertial Upper Stage out of the shuttle's cargo bay. Coleman served as Chief of Robotics for the Astronaut Office, to include robotic arm operations and training for all Space Shuttle and International Space Station missions. In October 2004, Coleman served as an aquanaut during the NEEMO 7 mission aboard the Aquarius underwater laboratory, living and working underwater for eleven days. Coleman was assigned as a backup U.S. 
crew member for Expeditions 19, 20 and 21 and served as a backup crew member for Expeditions 24 and 25 as part of her training for Expedition 26. Coleman launched on December 15, 2010 (December 16, 2010 Baikonur time), aboard Soyuz TMA-20 to join the Expedition 26 mission aboard the International Space Station. She retired from NASA on December 1, 2016. Spaceflight experience STS-73 on Space Shuttle Columbia (October 20 to November 5, 1995) was the second United States Microgravity Laboratory (USML-2) mission. The mission focused on materials science, biotechnology, combustion science, the physics of fluids, and numerous scientific experiments housed in the pressurized Spacelab module. In completing her first space flight, Coleman orbited the Earth 256 times, traveled over 6 million miles, and logged a total of 15 days, 21 hours, 52 minutes and 21 seconds in space. STS-93 on Columbia (July 22 to 27, 1999) was a five-day mission during which Coleman was the lead mission specialist for the deployment of the Chandra X-ray Observatory. Designed to conduct comprehensive studies of the universe, the telescope will enable scientists to study exotic phenomena such as exploding stars, quasars, and black holes. Mission duration was 118 hours and 50 minutes. Soyuz TMA-20 / Expedition 26/27 (December 15, 2010, to May 23, 2011) was an extended duration mission to the International Space Station. Personal Coleman is married to |
of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone. Function Fertility The cervical canal is a pathway through which sperm enter the uterus after being induced by estradiol after sexual intercourse, and some forms of artificial insemination. Some sperm remains in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. A theory states the cervical and uterine contractions during orgasm draw semen into the uterus. Although the "upsuck theory" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors. Some methods of fertility awareness, such as the Creighton model and the Billings method involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning. Cervical mucus Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content varies during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels— the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the "ferning" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit most prominent around the time of ovulation. At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This "infertile" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy. A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge. Childbirth The cervix plays a major role in childbirth. 
As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement. Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than . The second phase of labor begins when the cervix has dilated to , which is regarded as its fullest dilation, and is when active pushing and contractions push the baby along the birth canal leading to the birth of the baby. The number of past vaginal deliveries is a strong factor in influencing how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth. Cervical incompetence is a condition in which shortening of the cervix due to dilation and thinning occurs, before term pregnancy. Short cervical length is the strongest predictor of preterm birth. Contraception Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintain the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. In addition, they may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness. Clinical significance Cancer In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer, however this has mainly taken place in developed countries. 
Most developing countries have limited or no screening, and 85% of the global burden occurring there. Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer, by inoculating against the viral strains involved in cancer development. Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world including the UK, the Pap test has been superseded with liquid-based cytology. A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions. A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort. Inflammation Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When associated with the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women having a gonorrheal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated through directly visualising the cervix using a speculum, which may appear whiteish due to exudate, and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment. Anatomical abnormalities Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. 
Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix appearance, which is the condition wherein, as the name suggests, the cervix of the uterus is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. in-utero exposure) develop a cockscomb cervix. Enlarged folds or ridges of cervical stroma (fibrous tissues) and epithelium constitute a cockscomb cervix. Similarly, | uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement. Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than . The second phase of labor begins when the cervix has dilated to , which is regarded as its fullest dilation, and is when active pushing and contractions push the baby along the birth canal leading to the birth of the baby. The number of past vaginal deliveries is a strong factor in influencing how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth. Cervical incompetence is a condition in which shortening of the cervix due to dilation and thinning occurs, before term pregnancy. Short cervical length is the strongest predictor of preterm birth. Contraception Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintain the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. In addition, they may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness. Clinical significance Cancer In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. 
The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer, however this has mainly taken place in developed countries. Most developing countries have limited or no screening, and 85% of the global burden occurring there. Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer, by inoculating against the viral strains involved in cancer development. Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world including the UK, the Pap test has been superseded with liquid-based cytology. A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions. A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort. Inflammation Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When associated with the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women having a gonorrheal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated through directly visualising the cervix using a speculum, which may appear whiteish due to exudate, and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment. Anatomical abnormalities Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. 
Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix, a condition in which, as the name suggests, the cervix is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. in-utero exposure) develop a cockscomb cervix. Enlarged folds or ridges of cervical stroma (fibrous tissues) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually grouped under the same overarching description. It is in and of itself considered a benign abnormality; its presence, however, is usually indicative of DES exposure, and as such women who experience these abnormalities should be aware of their |
10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass. The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once. Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program. Three-stage compiler structure Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end. The front end scans the input and verifies syntax and semantics according to a specific source language. For statically typed languages it performs type checking by collecting type information. If the input program is syntactically incorrect or has a type error, it generates error and/or warning messages, usually identifying the location in the source code where the problem was detected; in some cases the actual error may be (much) earlier in the program. Aspects of the front end include lexical analysis, syntax analysis, and semantic analysis. The front end transforms the input program into an intermediate representation (IR) for further processing by the middle end. This IR is usually a lower-level representation of the program with respect to the source code. The middle end performs optimizations on the IR that are independent of the CPU architecture being targeted. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors. Examples of middle end optimizations are removal of useless (dead code elimination) or unreachable code (reachability analysis), discovery and propagation of constant values (constant propagation), relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context, eventually producing the "optimized" IR that is used by the back end. The back end takes the optimized IR from the middle end. It may perform more analysis, transformations and optimizations that are specific for the target CPU architecture. The back end generates the target-dependent assembly code, performing register allocation in the process. The back end performs instruction scheduling, which re-orders instructions to keep parallel execution units busy by filling delay slots. Although most optimization problems are NP-hard, heuristic techniques for solving them are well-developed and currently implemented in production-quality compilers. Typically the output of a back end is machine code specialized for a particular processor and operating system. This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. 
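The division of labour between the three stages can be made concrete with a minimal sketch in Python, chosen here purely for brevity; the tuple-based IR, the toy stack machine, and the function names front_end, middle_end and back_end are assumptions made for illustration, not the interface of any real compiler. The front end reuses Python's own ast module as a stand-in parser so that the example stays short.

    # A toy three-stage pipeline: the front end lowers source text to a small
    # tuple-based IR, the middle end applies a target-independent optimization
    # (constant folding), and the back end emits code for an imagined stack machine.
    import ast  # Python's own parser is reused here as a stand-in front end

    def front_end(source: str):
        """Parse the source and lower it to a language-independent IR."""
        def lower(node):
            if isinstance(node, ast.BinOp):
                op = {ast.Add: "add", ast.Mult: "mul"}[type(node.op)]
                return (op, lower(node.left), lower(node.right))
            if isinstance(node, ast.Constant):
                return ("const", node.value)
            if isinstance(node, ast.Name):
                return ("var", node.id)
            raise SyntaxError(f"unsupported construct: {node!r}")
        return lower(ast.parse(source, mode="eval").body)

    def middle_end(ir):
        """Fold constant subexpressions; this pass knows nothing about the target CPU."""
        if ir[0] in ("add", "mul"):
            left, right = middle_end(ir[1]), middle_end(ir[2])
            if left[0] == "const" and right[0] == "const":
                value = left[1] + right[1] if ir[0] == "add" else left[1] * right[1]
                return ("const", value)
            return (ir[0], left, right)
        return ir

    def back_end(ir):
        """Emit instructions for a hypothetical stack machine."""
        if ir[0] == "const":
            return [f"PUSH {ir[1]}"]
        if ir[0] == "var":
            return [f"LOAD {ir[1]}"]
        return back_end(ir[1]) + back_end(ir[2]) + [ir[0].upper()]

    print(back_end(middle_end(front_end("2 * 3 + x"))))
    # ['PUSH 6', 'LOAD x', 'ADD']

Because the middle end only ever sees the tuple-based IR, the same folding pass could in principle sit between a front end for a different source language and a back end for a different target, which is exactly the sharing that the front/middle/back-end split is meant to enable.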
Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends. Front end The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly today, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare. The main phases of the front end include the following: Line reconstruction converts the input character sequence to a canonical form ready for the parser. Languages which strop their keywords or allow arbitrary spaces within identifiers require this phase. The top-down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a Line Reconstruction phase. Preprocessing supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms. Lexical analysis (also known as lexing or tokenization) breaks the source code text into a sequence of small pieces called lexical tokens. This phase can be divided into two stages: the scanning, which segments the input text into syntactic units called lexemes and assigns them a category; and the evaluating, which converts lexemes into a processed value. A token is a pair consisting of a token name and an optional token value. Common token categories may include identifiers, keywords, separators, operators, literals and comments, although the set of token categories varies in different programming languages.
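The scanning and evaluating stages described above can be sketched with Python regular expressions producing (token name, value) pairs; the token categories, the keyword set and the sample input below are invented for the example rather than taken from any particular language definition.

    import re

    # Hypothetical token categories for a small C-like fragment; the scanner
    # segments the text into lexemes, the evaluator attaches a processed value.
    TOKEN_SPEC = [
        ("COMMENT",    r"//[^\n]*"),        # discarded below
        ("NUMBER",     r"\d+(?:\.\d+)?"),
        ("IDENT",      r"[A-Za-z_]\w*"),
        ("OPERATOR",   r"[+\-*/=<>!]=?"),
        ("SEPARATOR",  r"[(){};,]"),
        ("WHITESPACE", r"\s+"),             # discarded below
        ("MISMATCH",   r"."),               # anything else is a lexical error
    ]
    KEYWORDS = {"if", "else", "while", "return", "int"}
    MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

    def tokenize(code):
        """Yield (token name, value) pairs, as described above."""
        for match in MASTER_RE.finditer(code):
            kind, lexeme = match.lastgroup, match.group()
            if kind in ("WHITESPACE", "COMMENT"):
                continue                    # not passed on to the parser
            if kind == "MISMATCH":
                raise SyntaxError(f"unexpected character {lexeme!r}")
            if kind == "IDENT" and lexeme in KEYWORDS:
                kind = "KEYWORD"            # keywords are reserved identifiers
            value = float(lexeme) if kind == "NUMBER" else lexeme  # evaluating stage
            yield (kind, value)

    print(list(tokenize("if (x1 >= 42) return x1; // trailing comment")))
    # [('KEYWORD', 'if'), ('SEPARATOR', '('), ('IDENT', 'x1'), ('OPERATOR', '>='),
    #  ('NUMBER', 42.0), ('SEPARATOR', ')'), ('KEYWORD', 'return'), ('IDENT', 'x1'),
    #  ('SEPARATOR', ';')]

The single combined regular expression effectively acts as the finite state automaton that recognizes the lexeme syntax, which is the point taken up next.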
The lexeme syntax is typically a regular language, so a finite state automaton constructed from a regular expression can be used to recognize it. The software doing lexical analysis is called a lexical analyzer. This may not be a separate step—it can be combined with the parsing step in scannerless parsing, in which case parsing is done at the character level, not the token level. Syntax analysis (also known as parsing) involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler. Semantic analysis adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation. Middle end The middle end, also known as the optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted. The main phases of the middle end include the following: Analysis: This is the gathering of program information from the intermediate representation derived from the input; data-flow analysis is used to build use-define chains, together with dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The control-flow graph of every compiled function and the call graph of the program are usually also built during the analysis phase. Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead code elimination, constant propagation, loop transformation and even automatic parallelization. Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation. The scope of compiler analysis and optimizations varies greatly; it may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. In contrast, interprocedural optimization requires more compilation time and memory space, but enables optimizations that are only possible by considering the behavior of multiple functions simultaneously.
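As a sketch of the syntax analysis phase described above, the following minimal recursive-descent parser turns a token stream (of the kind the lexer sketch above produces) into a small abstract syntax tree; the grammar and node shapes are assumptions chosen only for illustration.

```python
# Minimal recursive-descent parser sketch for the invented grammar
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER | IDENT | '(' expr ')'
# It consumes (kind, value) tokens and builds a nested-tuple AST.
class Parser:
    def __init__(self, tokens):
        self.tokens = list(tokens) + [("EOF", None)]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def expect(self, kind):
        tok_kind, value = self.tokens[self.pos]
        if tok_kind != kind:
            raise SyntaxError(f"expected {kind}, found {tok_kind}")
        self.pos += 1
        return value

    def parse_expr(self):
        node = self.parse_term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            op = self.expect("OP")
            node = ("binop", op, node, self.parse_term())
        return node

    def parse_term(self):
        node = self.parse_factor()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            op = self.expect("OP")
            node = ("binop", op, node, self.parse_factor())
        return node

    def parse_factor(self):
        kind, _ = self.peek()
        if kind == "NUMBER":
            return ("num", self.expect("NUMBER"))
        if kind == "IDENT":
            return ("var", self.expect("IDENT"))
        self.expect("LPAREN")
        node = self.parse_expr()
        self.expect("RPAREN")
        return node

tokens = [("IDENT", "base"), ("OP", "*"), ("LPAREN", "("), ("NUMBER", 1),
          ("OP", "+"), ("IDENT", "rate"), ("RPAREN", ")")]
print(Parser(tokens).parse_expr())
# ('binop', '*', ('var', 'base'), ('binop', '+', ('num', 1), ('var', 'rate')))
```

A semantic analysis pass would then walk this tree, filling the symbol table and checking, for example, that base and rate are declared and have compatible types.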
Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with a full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes. Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled. Back end The back end is responsible for the CPU architecture specific optimizations and for code generation. The main phases of the back end include the following: Machine dependent optimizations: optimizations that depend on the details of the CPU architecture that the compiler targets. A prominent example is peephole optimization, which rewrites short sequences of assembler instructions into more efficient ones. Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory, and the selection and scheduling of appropriate machine instructions along with their associated addressing modes (see also Sethi–Ullman algorithm). Debug data may also need to be generated to facilitate debugging. Compiler correctness Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler. Compiled versus interpreted languages Higher-level programming languages usually appear with a type of translation in mind: either designed as a compiled language or an interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters. Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a directly executed program is needed somewhere at the bottom of the execution stack (see machine language). Furthermore, compilers can contain interpreter functionality for optimization purposes, and interpreters may include ahead-of-time compilation techniques. For example, where an expression can be executed during compilation and its result inserted into the output program, it does not have to be recalculated each time the program runs, which can greatly speed up the final program. Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further. Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp.
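Returning to the machine-dependent peephole optimization mentioned under the back end above, here is a minimal sketch that slides a two-instruction window over an invented pseudo-assembly; the mnemonics and rewrite rules are illustrative assumptions, not a real instruction set.

```python
# Peephole optimization sketch: slide a small window over invented
# pseudo-assembly and rewrite known wasteful patterns.
def peephole(code):
    out, i = [], 0
    while i < len(code):
        window = code[i:i + 2]
        op = window[0]
        # "push X" immediately followed by "pop X" has no net effect
        if len(window) == 2 and op[0] == "push" and window[1] == ("pop", op[1]):
            i += 2
            continue
        # adding zero changes nothing, so the instruction can be dropped
        if op[0] == "add" and op[2] == 0:
            i += 1
            continue
        # multiplying by two can become a cheaper shift on many targets
        if op[0] == "mul" and op[2] == 2:
            out.append(("shl", op[1], 1))
            i += 1
            continue
        out.append(op)
        i += 1
    return out

code = [("load", "r0"), ("push", "r0"), ("pop", "r0"),
        ("add", "r0", 0), ("mul", "r1", 2), ("store", "r0")]
print(peephole(code))
# [('load', 'r0'), ('shl', 'r1', 1), ('store', 'r0')]
```

Real peephole passes apply many such pattern rewrites, often repeatedly, because one rewrite can expose another.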
However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself. Types One classification of compilers is by the platform on which their generated code executes. This is known as the target platform. A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment. The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers. The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers. While a common compiler type outputs machine code, there are many other types: Source-to-source compilers are a type of compiler that takes a high-level language as its input and outputs a high-level language. For example, an automatic parallelizing compiler will frequently take in a high-level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements). Other terms for source-to-source compilers are language translator, language converter, or language rewriter. The last term is usually applied to translations that do not involve a change of language. Bytecode compilers compile to the assembly language of a theoretical machine, like some Prolog implementations. This Prolog machine is also known as the Warren Abstract Machine (or WAM). Bytecode compilers for Java and Python are also examples of this category. Just-in-time compilers (JIT compiler) defer compilation until runtime. JIT compilers exist for many modern languages including Python, JavaScript, Smalltalk, Java, Microsoft .NET's Common Intermediate Language
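To make the idea of compiling to the assembly language of a theoretical machine concrete, here is a toy bytecode compiler and stack-machine interpreter in Python; the opcode set is invented for illustration and is unrelated to the WAM or any real virtual machine, and only addition and multiplication are handled to keep the sketch short.

```python
# Toy bytecode compiler plus stack-machine interpreter. The opcodes
# (PUSH, LOAD, ADD, MUL) are invented for this example.
def compile_expr(ast, code=None):
    """Compile a nested-tuple AST (as built by the parser sketch) into bytecode."""
    code = [] if code is None else code
    kind = ast[0]
    if kind == "num":
        code.append(("PUSH", ast[1]))
    elif kind == "var":
        code.append(("LOAD", ast[1]))
    elif kind == "binop":
        _, op, left, right = ast
        compile_expr(left, code)
        compile_expr(right, code)
        code.append(("ADD" if op == "+" else "MUL",))
    return code

def run(code, env):
    """Execute the bytecode on a simple operand stack."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "LOAD":
            stack.append(env[instr[1]])
        else:                       # ADD / MUL: pop two operands, push one result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack.pop()

ast = ("binop", "*", ("var", "base"), ("binop", "+", ("num", 1), ("var", "rate")))
bytecode = compile_expr(ast)   # [('LOAD','base'), ('PUSH',1), ('LOAD','rate'), ('ADD',), ('MUL',)]
print(run(bytecode, {"base": 20, "rate": 4}))   # 20 * (1 + 4) = 100
```

A just-in-time compiler goes one step further and translates such bytecode into native machine code while the program is running.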
copying down the same from dictation, and another hour of literary study. During the remainder of the day, the young castrati had to find time to practice their harpsichord playing, and to compose vocal music, either sacred or secular depending on their inclination. This demanding schedule meant that, if sufficiently talented, they were able to make a debut in their mid-teens with a perfect technique and a voice of a flexibility and power no woman or ordinary male singer could match. In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art. Many came from poor homes and were castrated by their parents in the hope that their child might be successful and lift them from poverty (this was the case with Senesino). There are, though, records of some young boys asking to be operated on to preserve their voices (e.g. Caffarelli, who was from a wealthy family: his grandmother gave him the income from two vineyards to pay for his studies). Caffarelli was also typical of many castrati in being famous for tantrums on and off-stage, and for amorous adventures with noble ladies. Some, as described by Casanova, preferred gentlemen (noble or otherwise). Only a small percentage of boys castrated to preserve their voices had successful careers on the operatic stage; the better "also-rans" sang in cathedral or church choirs, but because of their marked appearance and the ban on their marrying, there was little room for them in society outside a musical context. The castrati came in for a great amount of scurrilous and unkind abuse, and as their fame increased, so did the hatred of them. They were often castigated as malign creatures who lured men into homosexuality. There were homosexual castrati, as Casanova's accounts of 18th-century Italy bear witness. He mentions meeting an abbé whom he took for a girl in disguise, only later discovering that "she" was a famous castrato. In Rome in 1762 he attended a performance at which the prima donna was a castrato, "the favourite pathic" of Cardinal Borghese, who dined every evening with his protector. From his behaviour on stage "it was obvious that he hoped to inspire the love of those who liked him as a man, and probably would not have done so as a woman". Decline By the late 18th century, changes in operatic taste and social attitudes spelled the end for castrati. They lingered on past the end of the ancien régime (which their style of opera parallels), and two of their number, Pacchierotti and Crescentini, performed before Napoleon. The last great operatic castrato was Giovanni Battista Velluti (1781–1861), who performed the last operatic castrato role ever written: Armando in Il crociato in Egitto by Meyerbeer (Venice, 1824). Soon after this they were replaced definitively as the first men of the operatic stage by a new breed of heroic tenor, as first incarnated by the Frenchman Gilbert-Louis Duprez, the earliest so-called "king of the high Cs". His successors have included such singers as Enrico Tamberlik, Jean de Reszke, Francesco Tamagno, Enrico Caruso, Giovanni Martinelli, Beniamino Gigli, Jussi Björling, Franco Corelli and Luciano Pavarotti, among others. After the unification of Italy in 1861, "eviration" was officially made illegal (the new Italian state had adopted the previous penal code of the Kingdom of Sardinia which expressly forbade the practice). 
In 1878, Pope Leo XIII prohibited the hiring of new castrati by the church: only in the Sistine Chapel and in other papal basilicas in Rome did a few castrati linger. A group photo of the Sistine Choir taken in 1898 shows that by then only six remained (plus the Direttore Perpetuo, the fine soprano castrato Domenico Mustafà), and in 1902 a ruling was extracted from Pope Leo that no further castrati should be admitted. The official end to the castrati came on St. Cecilia's Day, 22 November 1903, when the new pope, Pius X, issued his motu proprio, Tra le Sollecitudini ('Amongst the Cares'), which contained this instruction: "Whenever ... it is desirable to employ the high voices of sopranos and contraltos, these parts must be taken by boys, according to the most ancient usage of the Church." The last Sistine castrato to survive was Alessandro Moreschi, the only castrato to have made solo recordings. While an interesting historical record, these discs of his give us only a glimpse of the castrato voice – although he had been renowned as "The Angel of Rome" at the beginning of his career, some would say he was past his prime when the recordings were made in 1902 and 1904 and he never attempted to sing opera. Domenico Salvatori, a castrato who was contemporary with Moreschi, made some ensemble recordings with him but has no surviving solo recordings. The recording technology of the day was not of modern high quality. Salvatori died in 1909; Moreschi retired officially in March 1913, and died in 1922. The Catholic Church's involvement in the castrato phenomenon has long been controversial, and there have recently been calls for it to issue an official apology for its role. As early as 1748, Pope Benedict XIV tried to ban castrati from churches, but such was their popularity at the time that he realised that doing so might result in a drastic decline in church attendance. The rumours of another castrato sequestered in the Vatican for the personal delectation of the Pontiff until as recently as 1959 have been proven false. The singer in question was a pupil of Moreschi's, Domenico Mancini, such a successful imitator of his teacher's voice that even Lorenzo Perosi, Direttore Perpetuo of the Sistine Choir from 1898 to 1956 and a strenuous opponent of the practice of castrato singers, thought he was a castrato. Mancini was in fact a moderately skilful falsettist and professional double bass player. Modern castrati and similar voices So-called "natural" or "endocrinological castrati" are born with hormonal anomalies, such as Klinefelter's syndrome and Kallmann's syndrome, or have undergone unusual physical or medical events during their early lives that reproduce the vocal effects of castration without being castrated. In simple terms, a male can retain his child voice if it never changes during puberty. The retained voice can be the treble voice shared by both sexes in childhood and is the same as boy soprano voice. But as evidence shows, many castratos, such as Senesino and Caffarelli, were actually altos (mezzo-soprano) – not sopranos. Jimmy Scott, Radu Marian and Javier Medina are examples of this type of high male voice via endocrinological diseases. Michael Maniaci is somewhat different, in that he has no hormonal or other anomalies, but claims that his voice did not "break" in the usual manner, leaving him still able to sing in the soprano register. Other uncastrated male adults sing soprano, generally using some form of falsetto but in a much higher range than most countertenors. 
Examples are Aris Christofellis, Jörg Waschinski, and Ghio Nannini. However, it is believed the castrati possessed more of a tenorial chest register (the aria "Navigante che non spera" in Leonardo Vinci's opera Il Medo, written for Farinelli, requires notes down to C3, 131 Hz). Similar low-voiced singing can be heard from the jazz vocalist Jimmy Scott, whose range matches approximately that used by female blues singers. High-pitched singer Jordan Smith has demonstrated having more of a tenorial chest register. Actor Chris Colfer has stated in interviews that when his voice began to change at puberty he sang in a high voice "constantly" in an effort to retain his range. Actor and singer Alex Newell has soprano range. Voice actor Walter Tetley may or may not have been a castrato; Bill Scott, a co-worker of Tetley's during their later work in television, once half-jokingly quipped that Tetley's mother "had him fixed" to protect the child star's voice-acting career. Tetley never personally divulged the exact reason for his condition, which left him with the voice of a preteen boy for his entire adult life. Agriculture professor George Washington Carver was also reputed to have been castrated and had a high, childlike voice and stunted growth even in adulthood.
Notable castrati
Loreto Vittori (1604–1670)
Baldassare Ferri (1610–1680)
Atto Melani (1626–1714)
Giovanni Grossi ("Siface") (1653–1697)
Pier Francesco Tosi (1654–1732)
Nicolo Grimaldi ("Nicolini") (1673–1732)
Antonio Bernacchi (1685–1756)
Francesco Bernardi ("Senesino") (1686–1758)
Valentino Urbani ("Valentini") (1690–1722)
Giacinto Fontana ("Farfallino") (1692–1739)
of counting out originated in the "superstitious practices of divination by lots." Many such methods involve one person pointing at each participant in a circle of players while reciting a rhyme. A new person is pointed at as each word is said. The player who is selected at the conclusion of the rhyme is "it" or "out". In an alternate version, the circle of players may each put two feet in and at the conclusion of the rhyme, that player removes one foot and the rhyme starts over with the next person. In this case, the first player that has both feet removed is "it" or "out". In theory a counting rhyme is determined entirely by the starting selection (and would result in a modulo operation), but in practice they are often accepted as random selections because the number of words has not been calculated beforehand, so the result is unknown until someone is selected. A variant of the counting-out game, known as the Josephus problem, represents a famous theoretical problem in mathematics and computer science. Examples Several simple games can be played to select one person from a group, either as a straightforward winner, or as someone who is eliminated. Rock, Paper, Scissors, Odd or Even and Blue Shoe require no materials and are played using hand gestures, although with the former it is possible for a player to win or lose through skill rather than luck. Coin flipping and drawing straws are fair methods of randomly determining a player. Fizz Buzz is a spoken word game where if a player slips up and speaks a word out of
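The modulo observation and the Josephus-style elimination mentioned above can be simulated in a few lines; the player names and rhyme length below are arbitrary assumptions.

```python
# Counting-out simulation: point at players in a circle, one word per player,
# and eliminate whoever the last word of the rhyme lands on (the Josephus idea).
def counting_out(players, words_in_rhyme):
    """Return the elimination order; the selection is pure modulo arithmetic."""
    circle = list(players)
    order, index = [], 0
    while circle:
        index = (index + words_in_rhyme - 1) % len(circle)   # last word of the rhyme
        order.append(circle.pop(index))
    return order

players = ["Ann", "Ben", "Cal", "Dee", "Eve"]
print(counting_out(players, words_in_rhyme=8))
# The count wraps around the shrinking circle; for a fixed rhyme and starting
# point the outcome is fully determined, which is the modulo observation above.
```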
when only one or a few common 1024-bit or smaller prime moduli are in use. This common practice allows large amounts of communications to be compromised at the expense of attacking a small number of primes. Brute-force attack Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it is possible to run through the entire space of keys in what is known as a brute-force attack. Since longer symmetric keys require exponentially more work to brute-force search, a sufficiently long symmetric key makes this line of attack impractical. With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. However, experts anticipate alternative computing technologies that may have processing power superior to current computer technology. If a suitably sized quantum computer capable of running Grover's algorithm reliably becomes available, it would reduce a 128-bit key down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports a 256-bit key length. Symmetric algorithm key lengths US Government export policy has long restricted the "strength" of cryptography that can be sent out of the country. For many years the limit was 40 bits. Today, a key length of 40 bits offers little protection against even a casual attacker with a single PC. In response, by the year 2000, most of the major US restrictions on the use of strong encryption were relaxed. However, not all regulations have been removed, and encryption registration with the U.S. Bureau of Industry and Security is still required to export "mass market encryption commodities, software and components with encryption exceeding 64 bits". IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years". However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. The book Cracking DES (O'Reilly and Associates) tells of the successful attempt in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys; DES has been replaced in many applications by Triple DES, which has 112 bits of security when used with 168-bit keys (triple key). In 2002, Distributed.net and its volunteers broke a 64-bit RC5 key after several years' effort, using about seventy thousand (mostly home) computers. The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret. In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. As of 2005, 80-bit keys were allowed only until 2010. Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST-approved symmetric encryption algorithms include three-key Triple DES and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys. Asymmetric algorithm key lengths The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys.
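A rough back-of-the-envelope calculation illustrates why 40-bit and 56-bit keys fell to brute force while 128-bit keys have not; the guess rate assumed below is an arbitrary illustration, not a benchmark of any real cracking effort.

```python
# Back-of-the-envelope brute-force estimate. The guess rate is an assumed
# figure for a large dedicated cracking effort, chosen only for illustration.
GUESSES_PER_SECOND = 1e12          # assumption: one trillion key trials per second
SECONDS_PER_YEAR = 31_557_600

def years_to_exhaust(key_bits, rate=GUESSES_PER_SECOND):
    """Worst-case years to try every key of the given length at the assumed rate."""
    return (2 ** key_bits) / rate / SECONDS_PER_YEAR

for bits in (40, 56, 64, 128):
    print(f"{bits:3d}-bit key: {years_to_exhaust(bits):.3g} years to exhaust")
# At this rate the 40- and 56-bit spaces fall in seconds to hours, the 64-bit
# space in months, while a 128-bit space still needs on the order of 1e19 years.
```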
The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future. Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely-accepted recommendation of a 1024-bit minimum since at least 2002. 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable some time between 2006 and 2010, while 2048-bit keys are sufficient until 2030. The largest RSA key publicly known to have been cracked is RSA-250, with 829 bits. The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key. Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice the bits as the equivalent symmetric algorithm. A 256-bit ECDH key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic-curve algorithm using a 109-bit key was broken in 2004. The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET; in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information. Effect of quantum computing attacks on key strength The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems. Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards-based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking, and SSH used to protect access to sensitive computing systems, is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time. Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm.
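The comparable-strength figures quoted above can be collected into a small lookup sketch; the table simply restates the equivalences given in the text, and the helper names are invented for the example.

```python
# Comparable-strength lookup based on the figures quoted above
# (RSA/DH modulus size vs. symmetric security, plus the ~2x rule for ECC).
RSA_TO_SYMMETRIC = {1024: 80, 2048: 112, 3072: 128, 15360: 256}

def symmetric_equivalent(rsa_bits):
    """Symmetric-key security level matching a given RSA/DH modulus size."""
    return RSA_TO_SYMMETRIC[rsa_bits]

def ecc_equivalent(symmetric_bits):
    """Approximate ECC key size for a given symmetric security level (~2x rule)."""
    return 2 * symmetric_bits

print(symmetric_equivalent(2048))   # 112
print(ecc_equivalent(128))          # 256, matching the 256-bit ECDH ~ 128-bit AES note
```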
Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the
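The quadratic speed-up quoted above can be illustrated directly: under that bound, an n-bit key offers on the order of n/2 bits of security against a Grover-style search. A minimal sketch:

```python
# Illustration of the quadratic Grover-style speed-up quoted above: a quantum
# brute-force search needs about 2^(n/2) evaluations versus 2^n classically.
def brute_force_work(key_bits):
    classical = 2 ** key_bits
    quantum = 2 ** (key_bits / 2)      # roughly the square root of the classical work
    return classical, quantum

for bits in (128, 256):
    classical, quantum = brute_force_work(bits)
    print(f"{bits}-bit key: ~2^{bits} classical vs ~2^{bits // 2} quantum "
          f"({classical:.2e} vs {quantum:.2e})")
# This is why doubling the symmetric key length (e.g. 256-bit AES for a
# 128-bit security target) is the usual hedge against such attacks.
```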
may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are specific and technique-driven. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches. Related techniques CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process. Medical uses In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and with post-spinal cord injuries. In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and posttraumatic stress disorder (PTSD), as well as tic disorders, trichotillomania, and other repetitive behavior disorders. CBT has also been applied to a variety of childhood disorders, including depressive disorders and various anxiety disorders. Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may initially result in low-quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression. Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues. The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression. Patient age CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy altered to account for these age-related differences. Of the small number of studies examining CBT for the management of depression in older people, there is currently no strong support for its use. Depression and anxiety disorders Cognitive behavioral therapy has been shown to be an effective treatment for clinical depression.
The American Psychiatric Association Practice Guidelines (April 2000) indicated that, among psychotherapeutic approaches, cognitive behavioral therapy and interpersonal psychotherapy had the best-documented efficacy for treatment of major depressive disorder. A 2001 meta-analysis comparing CBT and psychodynamic psychotherapy suggested the approaches were equally effective in the short term for depression. In contrast, a 2013 meta-analysis suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression. According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy for several mental disorders. This included depression, panic disorder, post-traumatic stress, and other anxiety disorders. CBT has been shown to be effective in the treatment of adults with anxiety disorders. Results from a 2018 systematic review found a high strength of evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. CBT has also been shown to be effective for posttraumatic stress disorder in very young children (3 to 6 years of age). A Cochrane review found low-quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents. A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists." Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression. Theoretical approaches One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations. Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema. A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient.
For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. This "two-factor" model, in which fear is first acquired through classical conditioning and then maintained through avoidance of the feared stimulus, is often credited to O. Hobart Mowrer. Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation). Specialised forms of CBT CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable. Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to have a greater longevity in therapeutic outcomes. In a study of anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, showing that it is a highly viable lasting treatment model for anxiety disorders. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including in children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in adolescent anxiety. Combined with other treatments Studies of animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore be a more effective treatment for people with anxiety disorders. Prevention For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Subthreshold panic disorder sufferers were found to significantly benefit from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence. For depressive disorders, a stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on potential for increased depression scores from people who have received CBT due to greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.
Bipolar disorder Many studies show that CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity, and psychosocial functioning with mild to moderate effects, and that it is better than medication alone. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder; the review's list also included schizophrenia, depression, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders and alcohol dependency. Psychosis In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions). For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT. Schizophrenia INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia. A Cochrane review reported CBT had "no effect on long-term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn. Addiction and substance use disorders Pathological and problem gambling CBT is also used for pathological and problem gambling. The percentage of people who gamble problematically is around 1–3% worldwide. Cognitive behavioral therapy develops skills for relapse prevention, and a person can learn to control their thinking and manage high-risk situations. There is evidence of efficacy of CBT for treating pathological and problem gambling at immediate follow-up; however, the longer-term efficacy of CBT for it is currently unknown. Smoking cessation CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to introduce other coping mechanisms in place of smoking. CBT also aims to support individuals suffering from strong cravings, which are a major reported reason for relapse during treatment. A 2008 controlled study out of Stanford University School of Medicine suggested CBT may be an effective tool to help maintain abstinence. The results of 304 randomly assigned adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long term smoking abstinence.
Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction. A Cochrane review was unable to find evidence of any difference between CBT and hypnosis for smoking cessation. While this may be evidence of no effect, further research may uncover an effect of CBT for smoking cessation. Substance use disorders Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to reframe maladaptive thoughts, such as denial, minimizing and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency. Internet addiction Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning. Eating disorders Though many forms of treatment can support individuals with eating disorders, CBT is proven to be a more effective treatment than medications and interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first-line treatment for bulimia nervosa and for eating disorder not otherwise specified. While there is evidence to support the efficacy of CBT for bulimia nervosa and binging, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa. With autistic adults Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children. Other uses Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. CBT can have therapeutic effects on easing symptoms of anxiety and depression in people with Alzheimer's disease. CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency. There is some evidence that CBT is superior in the long term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia.
Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia. A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions. Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners. CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders. Individuals with medical conditions In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. A 2015 Cochrane review also found that CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children. There is limited evidence to support its use in coping with the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution. CBT was previously considered moderately effective for treating chronic fatigue syndrome; however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that cognitive behavioural therapy (CBT) has sometimes been assumed to be a cure for ME/CFS; however, it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning and reduce the distress associated with having a chronic illness. Methods of access Therapist A typical CBT programme would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial programme might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links. Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session.
The completion of these assignments – which can be as simple as a person suffering from depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapist can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.
Computerized or Internet-delivered (CCBT) Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning. CCBT has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youths and adolescent populations. Additionally, most internet interventions for posttraumatic stress disorder use CCBT. CCBT is also well suited to treating mood disorders amongst non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. However, at present, CCBT programs seldom cater to these populations. In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product. Smartphone app-delivered Another new method of access is the use of mobile app or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications for delivering CBT as an early intervention to support mental health, to build psychological resilience, and to promote emotional well-being. Artificial intelligence (AI) text-based conversational applications delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual and always-available support. Active research is underway, including real-world data studies that measure the effectiveness and engagement of smartphone chatbot apps delivering CBT through a text-based conversational interface. Reading self-help materials Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional). Group educational course Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT. Types Brief cognitive behavioral therapy Brief cognitive behavioral therapy (BCBT) is a form of CBT which has been developed for situations in which there are time constraints on the therapy sessions.
BCBT takes place over a couple of sessions that can last up to 12 accumulated hours by design. This technique was first implemented and developed by David M. Rudd to prevent suicide among soldiers on active duty overseas. Breakdown of treatment: orientation (commitment to treatment; crisis response and safety planning; means restriction; survival kit; reasons-for-living card; model of suicidality; treatment journal; lessons learned), skill focus (skill development worksheets; coping cards; demonstration; practice; skill refinement), and relapse prevention (skill generalization; skill refinement). Cognitive emotional behavioral therapy Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy. Structured cognitive behavioral training Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism. Moral reconation therapy Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format because of the risk that one-on-one therapy with offenders with ASPD may reinforce narcissistic behavioral characteristics, and it can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months. Stress inoculation training This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. It is usually used to help clients better cope with their stress or anxiety after stressful events. This is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems as emotion-focused or problem-focused so that they can better treat their negative situations.
This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions to their stressors. The focus is conceptualization. The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practised in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc. The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate themselves against personal, chronic, and future stressors by breaking their stressors down into problems they will address through long-term, short-term, and intermediate coping goals. Activity-guided CBT: Group-knitting A newly developed group therapy model based on Cognitive Behavioral Therapy (CBT) integrates knitting into the therapeutic process and has been proven to yield reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on the embeddedness of the therapy method in the patients' natural routine. Similar to standard group-based Cognitive Behavioural Therapy, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form. This therapeutic process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT emphasizes behavior as a result of cognition, the knitting illustrates how thoughts (which the patient imaginatively ties to the wool) materialize into the reality surrounding us. Mindfulness-based cognitive behavioral hypnotherapy Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness through a reflective approach and addresses subconscious tendencies. It is a process that essentially contains three phases used to achieve the desired goals. Unified Protocol The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of depressive and anxiety disorders. The rationale is that anxiety and depression disorders often occur together due to common underlying causes and can efficiently be treated together. The UP includes a common set of components: psycho-education, cognitive reappraisal, emotion regulation, and changing behaviour. The UP has been shown to produce equivalent results to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder. Several studies have shown that the UP is easier to disseminate as compared to single-diagnosis protocols. Criticisms Relative effectiveness The research conducted for CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, |
Middle Chinese can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics. Classical and literary forms The relationship between spoken and written Chinese is rather complex ("diglossia"). Its spoken varieties have evolved at different rates, while written Chinese itself has changed much less. Classical Chinese literature began in the Spring and Autumn period. Rise of northern dialects After the fall of the Northern Song dynasty and subsequent reign of the Jin (Jurchen) and Yuan (Mongol) dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The Zhongyuan Yinyun (1324) was a dictionary that codified the rhyming conventions of the new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects. Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guānhuà (literally "language of officials"). For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court. In the 1930s, a standard national language, Guóyǔ ("national language"), was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic, founded in 1949, retained this standard but renamed it pǔtōnghuà ("common speech"). The national language is now used in education, the media, and formal situations in both Mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools. Influence Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries.
Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later Korea, Japan, and Vietnam developed strong central governments modeled on Chinese institutions, with Literary Chinese as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese. Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean. Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines. Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters (kanji) and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters, hanja, is still required), and hanja are increasingly rarely used in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet. Examples of loan words in English include "tea", from Hokkien (Min Nan), "dim sum", from Cantonese dim2 sam1, and "kumquat", from Cantonese gam1gwat1. Varieties Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain.
In parts of South China, a major city's dialect may only be marginally intelligible to close neighbors. For instance, Wuzhou is upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, which lies southwest of Guangzhou and is separated from it by several rivers. In parts of Fujian the speech of neighboring counties or even villages may be mutually unintelligible. Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou. Grouping Local varieties of Chinese are conventionally classified into seven dialect groups, largely on the basis of the different evolution of Middle Chinese voiced initials: Mandarin (including Standard Chinese, Pekingese, Sichuanese, and also the Dungan language spoken in Central Asia); Wu (including Shanghainese, Suzhounese, and Wenzhounese); Gan; Xiang; Min (including Fuzhounese, Hainanese, Hokkien and Teochew); Hakka; and Yue (including Cantonese and Taishanese). The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups: Jin, previously included in Mandarin; Huizhou, previously included in Wu; and Pinghua, previously included in Yue. Some varieties remain unclassified, including Danzhou dialect (spoken in Danzhou, on Hainan Island), Waxianghua (spoken in western Hunan) and Shaozhou Tuhua (spoken in northern Guangdong). Standard Chinese Standard Chinese, often called Mandarin, is the official standard language of China, de facto official language of Taiwan, and one of the four official languages of Singapore (where it is called "Huáyŭ" or simply Chinese). Standard Chinese is based on the Beijing dialect, the dialect of Mandarin as spoken in Beijing. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools. In China and Taiwan, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai might speak Shanghainese; and, if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese also speak Taiwanese Hokkien (commonly "Taiwanese"), Hakka, or an Austronesian language. A Taiwanese may commonly mix pronunciations, phrases, and words from Mandarin and other Taiwanese languages, and this mixture is considered normal in daily or informal speech. Due to their traditional cultural ties to Guangdong province and colonial histories, Cantonese is used as the standard variant of Chinese in Hong Kong and Macau instead. Nomenclature The official Chinese designation for the major branches of Chinese is fāngyán (literally "regional speech"), whereas the more closely related varieties within these are called dìdiǎn fāngyán ("local speech"). Conventional English-language usage in Chinese linguistics is to use dialect for the speech of a particular place (regardless of status) and dialect group for a regional grouping such as Mandarin or Wu.
Because varieties from different groups are not mutually intelligible, some scholars prefer to describe Wu and others as separate languages. Jerry Norman called this practice misleading, pointing out that Wu, which itself contains many mutually unintelligible varieties, could not be properly called a single language under the same criterion, and that the same is true for each of the other groups. Mutual intelligibility is considered by some linguists to be the main criterion for determining whether varieties are separate languages or dialects of a single language, although others do not regard it as decisive, particularly when cultural factors interfere as they do with Chinese. Linguists often ignore mutual intelligibility when varieties share intelligibility with a central variety (i.e. a prestige variety, such as Standard Mandarin), as the issue requires some careful handling when mutual intelligibility is inconsistent with language identity. John DeFrancis argues that it is inappropriate to refer to Mandarin, Wu and so on as "dialects" because the mutual unintelligibility between them is too great. On the other hand, he also objects to considering them as separate languages, as it incorrectly implies a set of disruptive "religious, economic, political, and other differences" between speakers that exist, for example, between French Catholics and English Protestants in Canada, but not between speakers of Cantonese and Mandarin in China, owing to China's near-uninterrupted history of centralized government. Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed. These include vernacular, lect, regionalect, topolect, and variety. Most Chinese people consider the spoken varieties as one single language because speakers share a common culture and history, as well as a shared national identity and a common written form. Phonology The phonological structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant+glide; zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants /m/ and /ŋ/ can stand alone as their own syllables. In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals /m/, /n/, /ŋ/, the retroflex approximant /ɻ/, and voiceless stops /p/, /t/, /k/, or /ʔ/. Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only /n/, /ŋ/ and /ɻ/. The number of sounds in the different spoken dialects varies, but in general there has been a tendency to a reduction in sounds from Middle Chinese. The Mandarin dialects in particular have experienced a dramatic decrease in sounds and so have far more multisyllabic words than most other spoken varieties. The total number of syllables in some varieties is therefore only about a thousand, including tonal variation, which is only about an eighth as many as English. Tones All varieties of spoken Chinese use tones to distinguish words.
A few dialects of north China may have as few as three tones, while some dialects in south China have up to 6 or 12 tones, depending on how one counts. One exception to this is Shanghainese, which has reduced the set of tones to a two-toned pitch accent system much like modern Japanese. A very common example used to illustrate the use of tones in Chinese is the application of the four tones of Standard Chinese (along with the neutral tone) to the syllable ma. The tones are exemplified by the following five Chinese words: mā 'mother', má 'hemp', mǎ 'horse', mà 'scold', and ma (a question particle). Standard Cantonese, in contrast, has six tones. Historically, finals that end in a stop consonant were considered to be "checked tones" and thus counted separately for a total of nine tones. However, they are considered to be duplicates in modern linguistics and are no longer counted as such. Grammar Chinese is often described as a "monosyllabic" language. However, this is only partially correct. It is largely accurate when describing Classical Chinese and Middle Chinese; in Classical Chinese, for example, perhaps 90% of words correspond to a single syllable and a single character. In the modern varieties, it is usually the case that a morpheme (unit of meaning) is a single syllable; in contrast, English has many multi-syllable morphemes, both bound and free, such as "seven", "elephant", "para-" and "-able". Some of the conservative southern varieties of modern Chinese have largely monosyllabic words, especially among the more basic vocabulary. In modern Mandarin, however, most nouns, adjectives and verbs are largely disyllabic. A significant cause of this is phonological attrition. Sound change over time has steadily reduced the number of possible syllables. In modern Mandarin, there are now only about 1,200 possible syllables, including tonal distinctions, compared with about 5,000 in Vietnamese (still largely monosyllabic) and over 8,000 in English. This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as shí (tone 2): 'ten'; 'real, actual'; 'know (a person), recognize'; 'stone'; 'time'; 'food, eat'. These were all pronounced differently in Early Middle Chinese and are distinguished in William H. Baxter's transcription. They are still pronounced differently in today's Cantonese; in Jyutping they are sap9, sat9, sik7, sek9, si4, sik9. In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is; Yuen Ren Chao's modern poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced shi. As such, most of these words have been replaced (in speech, if not in writing) with a longer, less-ambiguous compound. Only the first one, 'ten', normally appears as such when spoken; the rest are normally replaced with, respectively, shíjì (lit. 'actual-connection'); rènshi (lit. 'recognize-know'); shítou (lit. 'stone-head'); shíjiān (lit. 'time-interval'); shíwù (lit. 'foodstuff'). In each case, the homophone was disambiguated by adding another morpheme, typically either a synonym or a generic word of some sort (for example, 'head', 'thing'), the purpose of which is simply to indicate which of the possible meanings of the other, homophonic syllable should be selected. However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic.
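As a brief illustrative aside, the disambiguation-by-compounding pattern just described can be sketched as a simple lookup in Python. The dictionary content below uses only the shí examples listed in the passage; the mapping itself and the helper function resolve are hypothetical, added here for illustration rather than taken from any reference work.

# Minimal sketch: each homophonous meaning of spoken "shi2" is picked out by a
# (usually disyllabic) compound, as described in the passage above.
SHI2_COMPOUNDS = {
    "ten": "shí",              # normally appears on its own when spoken
    "real, actual": "shíjì",
    "know, recognize": "rènshi",
    "stone": "shítou",
    "time": "shíjiān",
    "food, eat": "shíwù",
}

def resolve(meaning: str) -> str:
    """Return the spoken form that unambiguously conveys the intended meaning."""
    return SHI2_COMPOUNDS[meaning]

if __name__ == "__main__":
    for gloss, spoken in SHI2_COMPOUNDS.items():
        print(f"{gloss!r} -> {spoken}")

The prose that follows continues this example, noting that the disambiguating syllable is dropped again when the word enters a larger compound.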
For example, shí alone, not shítou, appears in compounds meaning 'stone-', for example, shígāo 'plaster' (lit. 'stone cream'), shíhuī 'lime' (lit. 'stone dust'), shíkū 'grotto' (lit. 'stone cave'), shíyīng 'quartz' (lit. 'stone flower'), shíyóu 'petroleum' (lit. 'stone oil'). Most modern varieties of Chinese have the tendency to form new words through disyllabic, trisyllabic and tetra-character compounds. In some cases, monosyllabic words have become disyllabic without compounding, as in kūlong from kǒng 孔; this is especially common in Jin. Chinese morphology is strictly bound to a set number of syllables with a fairly rigid construction. Although many of these single-syllable morphemes (zì) can stand alone as individual words, they more often than not form multi-syllabic compounds, known as cí, which more closely resemble the traditional Western notion of a word. A Chinese cí ('word') can consist of more than one character-morpheme, usually two, but there can be three or more. For example: yún 'cloud', hànbǎobāo 'hamburger', wǒ 'I, me', rén 'people, human, mankind', dìqiú 'the Earth', shǎndiàn 'lightning', mèng 'dream'. All varieties of modern Chinese are analytic languages, in that they depend on syntax (word order and sentence structure) rather than morphology (i.e., changes in the form of a word) to indicate the word's function in a sentence. In other words, Chinese has very few grammatical inflections: it possesses no tenses, no voices, no numbers (singular, plural; though there are plural markers, for example for personal pronouns), and only a few articles (i.e., equivalents to "the, a, an" in English). Instead, the varieties make heavy use of grammatical particles to indicate aspect and mood. In Mandarin Chinese, this involves the use of particles like le (perfective), hái ('still'), yǐjīng ('already'), and so on. Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages like Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping. The written form, using the logograms known as Chinese characters, is shared by literate speakers of mutually unintelligible dialects. Since the 1950s, simplified Chinese characters have been promoted for use by the government of the People's Republic of China, while Singapore officially adopted simplified characters in 1976. Traditional characters remain in use in Taiwan, Hong Kong, Macau, and other countries with significant overseas Chinese-speaking communities such as Malaysia (which, although it adopted simplified characters as the de facto standard in the 1980s, still uses traditional characters widely). Classification Linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan and many other languages spoken in the Himalayas and the Southeast Asian Massif. Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact.
In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated. History The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard. Old and Middle Chinese The earliest examples of Chinese are divinatory inscriptions on oracle bones from around 1250 BCE in the late Shang dynasty. Old Chinese was the language of the Western Zhou period (1046–771 BCE), recorded in inscriptions on bronze artifacts, the Classic of Poetry and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacks inflection, and indicated grammatical relationships using word order and grammatical particles. Middle Chinese was the language used during Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th through 10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables such as the Yunjing constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics. Classical and literary forms The relationship between spoken and written Chinese is rather complex ("diglossia"). Its spoken varieties have evolved at different rates, while written Chinese itself has changed much less. Classical Chinese literature began in the Spring and Autumn period. Rise of northern dialects After the fall of the Northern Song dynasty and subsequent reign of the Jin (Jurchen) and Yuan (Mongol) dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The Zhongyuan Yinyun (1324) was a dictionary that codified the rhyming conventions of new sanqu verse form in this language. 
Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects. Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guānhuà (/, literally "language of officials"). For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court. In the 1930s, a standard national language, Guóyǔ (/ ; "national language") was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic founded in 1949 retained this standard but renamed it pǔtōnghuà (/; "common speech"). The national language is now used in education, the media, and formal situations in both Mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools. Influence Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later Korea, Japan, and Vietnam developed strong central governments modeled on Chinese institutions, with Literary Chinese as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese. Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese and Vietnamese languages, and today comprise over half of their vocabularies. This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean. Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. 
They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines. Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters (kanji) and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters - hanja - is still required), and hanja are increasingly rarely used in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet. Examples of loan words in English include "tea", from Hokkien (Min Nan) (), "dim sum", from Cantonese dim2 sam1 () and "kumquat", from Cantonese gam1gwat1 (). Varieties Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbors. For instance, Wuzhou is about upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighboring counties or even villages may be mutually unintelligible. Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou. Grouping Local varieties of Chinese are conventionally classified into seven dialect groups, largely on the basis of the different evolution of Middle Chinese voiced initials: Mandarin, including Standard Chinese, Pekingese, Sichuanese, and also the Dungan language spoken in Central Asia Wu, including Shanghainese, Suzhounese, and Wenzhounese Gan Xiang Min, including Fuzhounese, Hainanese, Hokkien and Teochew Hakka Yue, including Cantonese and Taishanese The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups: Jin, previously included in Mandarin. Huizhou, previously included in Wu. Pinghua, previously included in Yue. Some varieties remain unclassified, including Danzhou dialect (spoken in Danzhou, on Hainan Island), Waxianghua (spoken in western Hunan) and Shaozhou Tuhua (spoken in northern Guangdong). 
Standard Chinese Standard Chinese, often called Mandarin, is the official standard language of China, the de facto official language of Taiwan, and one of the four official languages of Singapore (where it is called "Huáyǔ" or simply Chinese). Standard Chinese is based on the Beijing dialect, the dialect of Mandarin as spoken in Beijing. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools. In China and Taiwan, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai might speak Shanghainese; and, if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese also speak Taiwanese Hokkien (commonly called "Taiwanese"), Hakka, or an Austronesian language. A Taiwanese speaker may commonly mix pronunciations, phrases, and words from Mandarin and other Taiwanese languages, and this mixture is considered normal in daily or informal speech. Due to Hong Kong's and Macau's traditional cultural ties to Guangdong province and their colonial histories, Cantonese is used as the standard variety of Chinese there instead. Nomenclature The official Chinese designation for the major branches of Chinese is fāngyán (literally "regional speech"), whereas the more closely related varieties within these are called dìdiǎn fāngyán ("local speech"). Conventional English-language usage in Chinese linguistics is to use dialect for the speech of a particular place (regardless of status) and dialect group for a regional grouping such as Mandarin or Wu.
Functions that are holomorphic everywhere except at a set of isolated points are known as meromorphic functions. On the other hand, functions such as the complex conjugate and the real part are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below). An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f(z) = u(x, y) + iv(x, y), with z = x + iy, is holomorphic on a region, then its real and imaginary parts u and v must satisfy the pair of equations u_x = v_y and u_y = -v_x throughout that region, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see the Looman–Menchoff theorem). Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: the whole complex plane, the plane with a single point removed, or a single point (the constant case). In other words, if two distinct complex numbers a and b are both missing from the range of an entire function f, then f is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset. Major results One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials. A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra, which states that the field of complex numbers is algebraically closed. If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the definition of functions such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains, to be extended to almost the entire complex plane.
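The analytic continuation idea just described can be illustrated numerically with the simplest possible case, the geometric series. The sketch below is a hedged illustration in Python with invented names; the series, the closed form 1/(1 - z), and the sample points are chosen for demonstration and are not drawn from the text.

```python
# A small illustration of analytic continuation: the power series sum z**n
# converges only for |z| < 1, but it agrees there with 1 / (1 - z), which is
# holomorphic everywhere except z = 1 and therefore continues the series.

def geometric_series(z: complex, terms: int = 200) -> complex:
    """Partial sum of the power series sum_{n=0}^{terms-1} z**n."""
    total, power = 0j, 1 + 0j
    for _ in range(terms):
        total += power
        power *= z
    return total

def continued(z: complex) -> complex:
    """The analytic continuation 1 / (1 - z), valid for every z != 1."""
    return 1.0 / (1.0 - z)

if __name__ == "__main__":
    inside = 0.4 + 0.3j      # inside the disc of convergence
    outside = 2.0 + 1.0j     # series diverges here, the continuation does not
    print(abs(geometric_series(inside) - continued(inside)))  # should be tiny
    print(continued(outside))  # a finite value where the series is useless
```

The Riemann zeta function is continued in exactly this spirit, although the closed form there is far less elementary than 1/(1 - z).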
Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface. All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over.
The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions. A major application of certain complex spaces is in quantum mechanics as wave functions. See also Analytic continuation Vector calculus Complex dynamics List of complex analysis topics Monodromy theorem Real analysis Runge's theorem
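The contour-integral machinery described under Major results above can also be checked numerically. The following Python sketch is illustrative only: the function names, the choice of f(z) = exp(z), the unit-circle contour, and the sample point are assumptions made for the example rather than anything stated in the text.

```python
# A numerical check of two results described above: the integral of a holomorphic
# function around a closed contour vanishes (Cauchy integral theorem), and
# (1 / (2*pi*i)) * integral of f(z)/(z - a) around a circle enclosing a gives f(a)
# (Cauchy's integral formula).

import numpy as np

def circle_integral(g, center=0.0, radius=1.0, n=4096):
    """Approximate the contour integral of g around a circle of given center and radius."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)   # points on the contour
    dz = 1j * radius * np.exp(1j * theta)      # dz/dtheta
    return np.sum(g(z) * dz) * (2.0 * np.pi / n)

if __name__ == "__main__":
    f = np.exp           # an entire (everywhere holomorphic) function
    a = 0.3 + 0.2j       # a point strictly inside the unit circle

    # Cauchy integral theorem: the result should be numerically close to zero.
    print(abs(circle_integral(f)))

    # Cauchy's integral formula: recover f(a) from boundary values alone.
    formula = circle_integral(lambda z: f(z) / (z - a)) / (2j * np.pi)
    print(abs(formula - f(a)))   # should also be close to zero
```

Because the integrand is smooth and periodic in the angle, the plain equally spaced sum converges very quickly, which is why no sophisticated quadrature is needed for this kind of check.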
The Qin state's conquest of the other six powers enabled its king to proclaim himself the First Emperor (Qin Shi Huang). Imperial China Because the Qin First Emperor declared himself "emperor" (huangdi) in 221 BC, and rulers continued to use this term until the final Qing emperor abdicated in 1912 CE, this period is conventionally called Imperial China. It is sometimes divided into three sub-periods: Early, Middle, and Late. Major events in the Early sub-period include the Qin unification of China and its replacement by the Han, the First Split followed by the Jin unification, and the loss of north China. The Middle sub-period was marked by the Sui unification and its replacement by the Tang, the Second Split, and the Song unification. The Late sub-period included the Yuan, Ming, and Qing dynasties. Qin dynasty (221–206 BC) Though the unified reign of the First Qin Emperor lasted only 12 years, he managed to subdue great parts of what constitutes the core of the Han Chinese homeland and to unite them under a tightly centralized Legalist government seated at Xianyang (close to modern Xi'an). The doctrine of Legalism that guided the Qin emphasized strict adherence to a legal code and the absolute power of the emperor. This philosophy, while effective for expanding the empire in a military fashion, proved unworkable for governing it in peacetime. The Qin Emperor presided over the brutal silencing of political opposition, including the event known as the burning of books and burying of scholars. This would be the impetus behind the later Han synthesis incorporating the more moderate schools of political governance. Major contributions of the Qin include the concept of a centralized government, and the unification and development of the legal code, the written language, measurement, and currency of China after the tribulations of the Spring and Autumn and Warring States periods. Even something as basic as the length of axles for carts (which need to match ruts in the roads) had to be made uniform to ensure a viable trading system throughout the empire. Also as part of its centralization, the Qin connected the northern border walls of the states it defeated, making the first, though rough, version of the Great Wall of China. The Qin Empire's economy was based on the grain taxes paid by its subjects as well as the labor service they performed during the agricultural off-season. This is now well understood because large numbers of Qin administrative texts have been excavated. Qin's conquest and colonization of the Yangzi Valley played an important role in bringing this area under the control of Chinese empires. The tribes of the north, collectively called the Wu Hu by the Qin, were free from Chinese rule during the majority of the dynasty. Prohibited from trading with Qin dynasty peasants, the Xiongnu tribe living in the Ordos region in northwest China often raided them instead, prompting the Qin to retaliate. After a military campaign led by General Meng Tian, the region was conquered in 215 BC and agriculture was established; the peasants, however, were discontented and later revolted. The succeeding Han dynasty also expanded into the Ordos due to overpopulation, but depleted their resources in the process. Indeed, this was true of the dynasty's borders in multiple directions; modern Inner Mongolia, Xinjiang, Tibet, Manchuria, and regions to the southeast were foreign to the Qin, and even areas over which they had military control were culturally distinct.
After Qin Shi Huang's death the Qin government drastically deteriorated and eventually capitulated in 207 BC after the Qin capital was captured and sacked by rebels, which would ultimately lead to the establishment of the Han Empire. Despite the short duration of the Qin dynasty, it was immensely influential on China and the structure of future Chinese dynasties. Han dynasty (206 BC – AD 220) Western Han The Han dynasty was founded by Liu Bang, who emerged victorious in the Chu–Han Contention that followed the fall of the Qin dynasty. A golden age in Chinese history, the Han dynasty's long period of stability and prosperity consolidated the foundation of China as a unified state under a central imperial bureaucracy, which was to last intermittently for most of the next two millennia. During the Han dynasty, territory of China was extended to most of the China proper and to areas far west. Confucianism was officially elevated to orthodox status and was to shape the subsequent Chinese civilization. Art, culture and science all advanced to unprecedented heights. With the profound and lasting impacts of this period of Chinese history, the dynasty name "Han" had been taken as the name of the Chinese people, now the dominant ethnic group in modern China, and had been commonly used to refer to Chinese language and written characters. After the initial laissez-faire policies of Emperors Wen and Jing, the ambitious Emperor Wu brought the empire to its zenith. To consolidate his power, he extended patronage to Confucianism, which emphasizes stability and order in a well-structured society. Imperial Universities were established to support its study. At the urging of his Legalist advisors, however, he also strengthened the fiscal structure of the dynasty with government monopolies. Major military campaigns were launched to weaken the nomadic Xiongnu Empire, limiting their influence north of the Great Wall. Along with the diplomatic efforts led by Zhang Qian, the sphere of influence of the Han Empire extended to the states in the Tarim Basin, opened up the Silk Road that connected China to the west, stimulating bilateral trade and cultural exchange. To the south, various small kingdoms far beyond the Yangtze River Valley were formally incorporated into the empire. Emperor Wu also dispatched a series of military campaigns against the Baiyue tribes. The Han annexed Minyue in 135 BC and 111 BC, Nanyue in 111 BC, and Dian in 109 BC. Migration and military expeditions led to the cultural assimilation of the south. It also brought the Han into contact with kingdoms in Southeast Asia, introducing diplomacy and trade. After Emperor Wu, the empire slipped into gradual stagnation and decline. Economically, the state treasury was strained by excessive campaigns and projects, while land acquisitions by elite families gradually drained the tax base. Various consort clans exerted increasing control over strings of incompetent emperors and eventually the dynasty was briefly interrupted by the usurpation of Wang Mang. Xin dynasty In AD 9, the usurper Wang Mang claimed that the Mandate of Heaven called for the end of the Han dynasty and the rise of his own, and he founded the short-lived Xin dynasty. Wang Mang started an extensive program of land and other economic reforms, including the outlawing of slavery and land nationalization and redistribution. These programs, however, were never supported by the landholding families, because they favored the peasants. 
The instability of power brought about chaos, uprisings, and loss of territories. This was compounded by mass flooding of the Yellow River; silt buildup caused it to split into two channels and displaced large numbers of farmers. Wang Mang was eventually killed in Weiyang Palace by an enraged peasant mob in AD 23. Eastern Han Emperor Guangwu reinstated the Han dynasty with the support of landholding and merchant families at Luoyang, east of the former capital Xi'an. Thus, this new era is termed the Eastern Han dynasty. With the capable administrations of Emperors Ming and Zhang, the former glories of the dynasty were reclaimed, with brilliant military and cultural achievements. The Xiongnu Empire was decisively defeated. The diplomat and general Ban Chao further expanded the conquests across the Pamirs to the shores of the Caspian Sea, thus reopening the Silk Road and bringing trade and foreign cultures, along with the arrival of Buddhism. With extensive connections with the west, the first of several Roman embassies to China was recorded in Chinese sources, arriving by the sea route in AD 166, with a second one in AD 284. The Eastern Han dynasty was one of the most prolific eras of science and technology in ancient China, notably including the historic invention of papermaking by Cai Lun and the numerous scientific and mathematical contributions by the famous polymath Zhang Heng. Three Kingdoms (AD 220–280) By the 2nd century, the empire declined amidst land acquisitions, invasions, and feuding between consort clans and eunuchs. The Yellow Turban Rebellion broke out in AD 184, ushering in an era of warlords. In the ensuing turmoil, three states tried to gain predominance in the period of the Three Kingdoms, since greatly romanticized in works such as Romance of the Three Kingdoms. After Cao Cao reunified the north in 208, his son proclaimed the Wei dynasty in 220. Soon, Wei's rivals Shu and Wu proclaimed their independence, leading China into the Three Kingdoms period. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families. In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived. Jin dynasty (AD 266–420) The Jin dynasty was severely weakened by internecine fighting among imperial princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, a Jin prince in modern-day Nanjing became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the Jin dynasty as the Western Jin. Northern China fragmented into a series of independent kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been "sinicized" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During the period of the Sixteen Kingdoms, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze River Basin and Delta.
Northern and Southern dynasties (AD 420–589) In the early 5th century, China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties was led by a Han Chinese ruling family and used Jiankang (modern Nanjing) as its capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while northern barbarian regimes began to sinify. In the north, the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or Han Chinese who had married into Xianbei families. During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han. Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other. Sui dynasty (AD 581–618) The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession to the Northern Zhou, the Sui went on to conquer the Southern Chen in 589 to reunify China, ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries and imperial examinations for selecting officials from commoners, while improving on the fubing system of army conscription and the equal-field system of land distribution. These policies, which were adopted by later dynasties, brought enormous population growth and amassed great wealth for the state. Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was supported officially. Sui China was known for its numerous mega-construction projects. Intended for grain shipment and troop transport, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region, and in another route, to the northeast border. The Great Wall was also expanded, while a series of military conquests and diplomatic maneuvers further pacified its borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty. Tang dynasty (AD 618–907) The Tang dynasty was a golden age of Chinese civilization, a prosperous, stable, and creative period with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time. The first emperor, Emperor Gaozu, came to the throne on 18 June 618, placed there by his son, Li Shimin, who became the second emperor, Taizong, one of the greatest emperors in Chinese history. Combined military conquests and diplomatic maneuvers reduced threats from Central Asian tribes, extended the border, and brought neighboring states into a tributary system.
Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes from port cities such as Guangzhou connected with distant countries, and foreign merchants settled in China, encouraging a cosmopolitan culture. The Tang culture and social systems were observed and adapted by neighboring countries, most notably Japan. Internally, the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator, travelled to India on his own and returned with "over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics." The prosperity of the early Tang dynasty was abetted by a centralized bureaucracy. The government was organized as "Three Departments and Six Ministries" to separately draft, review, and implement policies. These departments were run by royal family members and landed aristocrats, but as the dynasty wore on, were joined or replaced by scholar officials selected by imperial examinations, setting patterns for later dynasties. Under the Tang "equal-field system", all land was owned by the Emperor and granted to each family according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the "Fubing system". These policies stimulated rapid growth in productivity and sustained a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continuously falling into the hands of private owners and religious institutions granted exemptions. The dynasty continued to flourish under the rule of Empress Wu Zetian, the only empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least 50 million people. There were vibrant artistic and cultural creations, including the works of the greatest Chinese poets, Li Bai and Du Fu. At the zenith of prosperity of the empire, the An Lushan Rebellion from 755 to 763 was a watershed event. War, disease, and economic disruption devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as Jiedushi, gained increasingly autonomous status. With the loss of revenue from the land tax, the central imperial government came to rely heavily on its salt monopoly. Externally, formerly submissive states raided the empire and the vast border territories were lost for centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy. In the late Tang period, the empire was worn out by recurring revolts of regional warlords, while internally, as scholar-officials engaged in fierce factional strife, corrupt eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port Guangzhou in 879 was followed by the massacre of most of its inhabitants, especially the large foreign merchant enclaves. By 881, the two capitals, Luoyang and Chang'an, had fallen in succession. The reliance on ethnic Han and Turkic warlords in suppressing the rebellion increased their power and influence.
Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division. Five Dynasties and Ten Kingdoms (AD 907–960) The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely, (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional Imperial heartland in northern China. Among the regimes, rulers of (Later) Tang, Jin and Han were sinicized Shatuo Turks, which ruled over the ethnic majority of Han Chinese. More stable and smaller regimes of mostly ethnic Han rulers coexisted in south and western China over the period, cumulatively constituted the "Ten Kingdoms". Amidst political chaos in the north, the strategic Sixteen Prefectures (region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of the China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominated in Northern China, there were mass southward migrations of population, which further enhanced the southward shift of cultural and economic centers in China. The era ended with the coup of Later Zhou general Zhao Kuangyin, and the establishment of the Song dynasty in 960, which eventually annihilated the remains of the "Ten Kingdoms" and reunified China. Song, Liao, Jin, and Western Xia dynasties (AD 960–1279) In 960, the Song dynasty was founded by Emperor Taizu, with its capital established in Kaifeng (also known as Bianjing). In 979, the Song dynasty reunified most of the China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty from 1032 to 1227. Aiming to recover the strategic Sixteen Prefectures lost in the previous dynasty, campaigns were launched against the Liao dynasty in the early Song period, which all ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song's submission and then agreement to the Chanyuan Treaty, which imposed heavy annual tributes from the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song's silver to the Liao was paid back through the purchase of Chinese goods and products, which expanded the Song economy, and replenished its treasury. This dampened the incentive for the Song to further campaign against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might which was derived from its primitive nomadic lifestyle. Similar treaties and social-economical consequences occurred in Song's relations with the Jin dynasty. Within the Liao Empire, the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphract annihilated the Liao dynasty, while remnants of Liao court members fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). 
The Jin invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident, ending the Northern Song dynasty. Later the entire north of China was conquered. The surviving members of the Song court regrouped in the new capital city of Hangzhou and established the Southern Song dynasty, which ruled territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279. Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization. The Song economy, facilitated by technological advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there was extensive maritime trade with neighboring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and northern Indian Ocean. The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. Amid this prosperous economic activity, the first paper currency in history emerged in the western city of Chengdu, as a supplement to the existing copper coins. The Song dynasty was considered to be a golden age of great advancement in science and technology in China, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). Innovations such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money all appeared during the Song dynasty. There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Enormous literary works were compiled during the Song dynasty, such as the historical work the Zizhi Tongjian ("Comprehensive Mirror to Aid in Government"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui. The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put to use on the battlefield by the Song army, inspiring a succession of new firearm and siege engine designs.
During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai River against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in the Song victories over the invading Jin forces at the Battle of Tangdao in the East China Sea and the Battle of Caishi on the Yangtze River in 1161. The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled, with a marked contraction in the economy. Despite fiercely resisting the Mongol advance for more than three decades, the Southern Song finally succumbed: its capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279. Yuan dynasty (AD 1271–1368) The Yuan dynasty was formally proclaimed in 1271, when the Great Khan of the Mongols, Kublai Khan, one of the grandsons of Genghis Khan, assumed the additional title of Emperor of China and treated his inherited part of the Mongol Empire as a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the whole of China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongolian heartland and other regions, inheriting the largest share of territory of the divided Mongol Empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. Following the precedent of the previous Jin dynasty, the capital of the Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital city to economic hubs in the southern part of China, setting the precedent and foundation whereby Beijing would largely remain the capital of the successive regimes that unified mainland China. After the peace treaty in 1304 that ended a series of Mongol civil wars, the emperors of the Yuan dynasty were upheld as the nominal Great Khan (Khagan) of the greater Mongol Empire over the other Mongol khanates, which nonetheless remained de facto autonomous. The era was known as Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single power.
Soon, Wei's rivals Shu and Wu proclaimed their independence, leading China into the Three Kingdoms period. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families. In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived. Jin dynasty (AD 266–420) The Jin dynasty was severely weakened by internecine fighting among imperial princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, a Jin prince in modern-day Nanjing became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the Jin dynasty as the Western Jin. Northern China fragmented into a series of independent kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been "sinicized" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During the period of the Sixteen Kingdoms, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze River Basin and Delta. Northern and Southern dynasties (AD 420–589) In the early 5th century, China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties was led by a Han Chinese ruling family and used Jiankang (modern Nanjing) as its capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while the northern barbarian regimes began to sinicize. In the north, the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or by Han Chinese who had married into Xianbei families. During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han. Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other. Sui dynasty (AD 581–618) The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession to the Northern Zhou, the Sui went on to conquer the Southern Chen in 589 and reunify China, ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries and the imperial examinations for selecting officials from commoners, while improving on the fubing system of army conscription and the equal-field system of land distribution. These policies, which were adopted by later dynasties, brought enormous population growth and amassed vast wealth for the state. 
Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was supported officially. Sui China was known for its numerous mega-construction projects. Intended for grain shipment and transporting troops, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region, and by another route, to the northeast border. The Great Wall was also expanded, while a series of military conquests and diplomatic maneuvers further pacified its borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty. Tang dynasty (AD 618–907) The Tang dynasty was a golden age of Chinese civilization, a prosperous, stable, and creative period with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time. The first emperor, Emperor Gaozu, came to the throne on 18 June 618, placed there by his son, Li Shimin, who became the second emperor, Taizong, one of the greatest emperors in Chinese history. Combined military conquests and diplomatic maneuvers reduced threats from Central Asian tribes, extended the border, and brought neighboring states into a tributary system. Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes from port cities such as Guangzhou connected with distant countries, and foreign merchants settled in China, encouraging a cosmopolitan culture. Tang culture and social systems were observed and adapted by neighboring countries, most notably Japan. Internally, the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator, travelled to India on his own and returned with "over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics." The prosperity of the early Tang dynasty was supported by a centralized bureaucracy. The government was organized as "Three Departments and Six Ministries" to separately draft, review, and implement policies. These departments were run by royal family members and landed aristocrats, but as the dynasty wore on, they were joined or replaced by scholar-officials selected by imperial examinations, setting patterns for later dynasties. Under the Tang "equal-field system", all land was owned by the Emperor and granted to each family according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the "Fubing system". These policies stimulated a rapid growth in productivity and a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continuously falling into the hands of private owners and religious institutions that had been granted exemptions. 
The dynasty continued to flourish under the rule of Empress Wu Zetian, the only empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least 50 million people. There were vibrant artistic and cultural creations, including works of the greatest Chinese poets, Li Bai and Du Fu. At the zenith of prosperity of the empire, the An Lushan Rebellion from 755 to 763 was a watershed event. War, disease, and economic disruption devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as Jiedushi, gained increasingly autonomous status. With loss of revenue from land tax, the central imperial government came to rely heavily on salt monopoly. Externally, former submissive states raided the empire and the vast border territories were lost for centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy. In late Tang period, the empire was worn out by recurring revolts of regional warlords, while internally, as scholar-officials engaged in fierce factional strife, corrupted eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port Guangzhou in 879 was followed by the massacre of most of its inhabitants, especially the large foreign merchant enclaves. By 881, both capitals, Luoyang and Chang'an, fell successively. The reliance on ethnic Han and Turkic warlords in suppressing the rebellion increased their power and influence. Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division. Five Dynasties and Ten Kingdoms (AD 907–960) The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely, (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional Imperial heartland in northern China. Among the regimes, rulers of (Later) Tang, Jin and Han were sinicized Shatuo Turks, which ruled over the ethnic majority of Han Chinese. More stable and smaller regimes of mostly ethnic Han rulers coexisted in south and western China over the period, cumulatively constituted the "Ten Kingdoms". Amidst political chaos in the north, the strategic Sixteen Prefectures (region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of the China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominated in Northern China, there were mass southward migrations of population, which further enhanced the southward shift of cultural and economic centers in China. The era ended with the coup of Later Zhou general Zhao Kuangyin, and the establishment of the Song dynasty in 960, which eventually annihilated the remains of the "Ten Kingdoms" and reunified China. Song, Liao, Jin, and Western Xia dynasties (AD 960–1279) In 960, the Song dynasty was founded by Emperor Taizu, with its capital established in Kaifeng (also known as Bianjing). 
In 979, the Song dynasty reunified most of the China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty from 1032 to 1227. Aiming to recover the strategic Sixteen Prefectures lost in the previous dynasty, campaigns were launched against the Liao dynasty in the early Song period, which all ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song's submission and then agreement to the Chanyuan Treaty, which imposed heavy annual tributes from the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song's silver to the Liao was paid back through the purchase of Chinese goods and products, which expanded the Song economy, and replenished its treasury. This dampened the incentive for the Song to further campaign against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might which was derived from its primitive nomadic lifestyle. Similar treaties and social-economical consequences occurred in Song's relations with the Jin dynasty. Within the Liao Empire, the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphract annihilated the Liao dynasty, while remnants of Liao court members fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). Jin's invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident, ending the Northern Song dynasty. Later the entire north of China was conquered. The survived members of Song court regrouped in the new capital city of Hangzhou, and initiated the Southern Song dynasty, which ruled territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279. Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization. The Song economy, facilitated by technology advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there were extensive maritime trade with neighboring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and northern Indian Ocean. 
The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. With prosperous economic activities, the historically first use of paper currency emerged in the western city of Chengdu, as a supplement to the existing copper coins. The Song dynasty was considered to be the golden age of great advancements in science and technology of China, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). Inventions such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money were all invented during the Song dynasty. There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Enormous literary works were compiled during the Song dynasty, such as the historical work, the Zizhi Tongjian ("Comprehensive Mirror to Aid in Government"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui. The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put into use in battlefields by the Song army, inspiring a succession of new firearms and siege engines designs. During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai River against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in Song's victory over the invading Jin forces at the Battle of Tangdao in the East China Sea, and the Battle of Caishi on the Yangtze River in 1161. The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled, with a marked contraction in economy. Despite viciously halting Mongol advance for more than three decades, the Southern Song capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279. Yuan dynasty (AD 1271–1368) The Yuan dynasty was formally proclaimed in 1271, when the Great Khan of Mongol, Kublai Khan, one of the grandsons of Genghis Khan, assumed the additional title of Emperor of China, and considered his inherited part of the Mongol Empire as a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the entire China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongolian heartland and other regions, inheriting the largest share of territory of the divided Mongol Empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. 
Following the practice of the preceding Jin dynasty, the capital of the Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital city to economic hubs in the southern part of China, setting the precedent and foundation for Beijing to remain the capital of most of the successive regimes that unified mainland China. After the peace treaty in 1304 that ended a series of Mongol civil wars, the emperors of the Yuan dynasty were upheld as the nominal Great Khan (Khagan) of the greater Mongol Empire over the other Mongol khanates, which nonetheless remained de facto autonomous. The era was known as Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single state, facilitating the flow of people, trade, and cultural exchange. A network of roads and a postal system were established to connect the vast empire. Lucrative maritime trade, developed during the previous Song dynasty, continued to flourish, with Quanzhou and Hangzhou emerging as the largest ports in the world. Adventurous travelers from the far west, most notably the Venetian Marco Polo, settled in China for many years. Upon his return, his detailed travel account inspired generations of medieval Europeans with the splendors of the Far East. The Yuan dynasty was the first economy in which paper currency, known at the time as Jiaochao, was used as the predominant medium of exchange. Its unrestricted issuance in the late Yuan period caused hyperinflation, which eventually brought about the downfall of the dynasty. While the Mongol rulers of the Yuan dynasty adapted substantially to Chinese culture, their sinicization was less extensive than that of earlier conquest dynasties in Chinese history. To preserve their superiority as the conquering and ruling class, they held traditional nomadic customs and heritage from the Mongolian steppe in high regard. On the other hand, the Mongol rulers also adapted flexibly to a variety of cultures from the many advanced civilizations within the vast empire. Traditional social structure and culture in China underwent an immense transformation under Mongol dominance. Large groups of foreign migrants settled in China; they enjoyed elevated social status over the Han Chinese majority while enriching Chinese culture with foreign elements. The class of scholar-officials and intellectuals, traditional bearers of elite Chinese culture, lost substantial social status. This stimulated the development of the culture of the common folk. There were prolific works in zaju variety shows and literary songs (sanqu), written in a distinctive poetic style known as qu. Vernacular novels gained unprecedented status and popularity. Before the Mongol invasion, Chinese dynasties reported approximately 120 million inhabitants; after the conquest had been completed in 1279, the 1300 census reported roughly 60 million people. This major decline is not necessarily due only to Mongol killings. Scholars such as Frederick W. 
Mote argue that the wide drop in numbers reflects an administrative failure to record rather than an actual decrease; others such as Timothy Brook argue that the Mongols created a system of enserfment among a huge portion of the Chinese populace, causing many to disappear from the census altogether; other historians including William McNeill and David Morgan consider that plague was the main factor behind the demographic decline during this period. In the 14th century China suffered additional depredations from epidemics of plague, estimated to have killed 25 million people, 30% of the population of China. Throughout the Yuan dynasty, there was some general sentiment among the populace against the Mongol dominance. Yet rather than the nationalist cause, it was mainly strings of natural disasters and incompetent governance that triggered widespread peasant uprisings since the 1340s. After the massive naval engagement at Lake Poyang, Zhu Yuanzhang prevailed over other rebel forces in the south. He proclaimed himself emperor and founded the Ming dynasty in 1368. The same year his northern expedition army captured the capital Khanbaliq. The Yuan remnants fled back to Mongolia and sustained the regime. Other Mongol Khanates in Central Asia continued to exist after the fall of Yuan dynasty in China. Ming dynasty (AD 1368–1644) The Ming dynasty was founded by Zhu Yuanzhang in 1368, who proclaimed himself as the Hongwu Emperor. The capital was initially set at Nanjing, and was later moved to Beijing from Yongle Emperor's reign onward. Urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil. Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the voyages of Zheng He. The Hongwu Emperor, being the only founder of a Chinese dynasty who was also of peasant origin, had laid the foundation of a state that relied fundamentally in agriculture. Commerce and trade, which flourished in the previous Song and Yuan dynasties, were less emphasized. Neo-feudal landholdings of the Song and Mongol periods were expropriated by the Ming rulers. Land estates were confiscated by the government, fragmented, and rented out. Private slavery was forbidden. Consequently, after the death of the Yongle Emperor, independent peasant landholders predominated in Chinese agriculture. These laws might have paved the way to removing the worst of the poverty during the previous regimes. Towards later era of the Ming dynasty, with declining government control, commerce, trade and private industries revived. The dynasty had a strong and complex central government that unified and controlled the empire. 
The emperor's role became more autocratic, although Hongwu Emperor necessarily continued to use what he called the "Grand Secretariat" to assist with the immense paperwork of the bureaucracy, including memorials (petitions and recommendations to the throne), imperial edicts in reply, reports of various kinds, and tax records. It was this same bureaucracy that later prevented the Ming government from being able to adapt to changes in society, and eventually led to its decline. The Yongle Emperor strenuously tried to extend China's influence beyond its borders by demanding other rulers send ambassadors to China to present tribute. A large navy was built, including four-masted ships displacing 1,500 tons. A standing army of 1 million troops was created. The Chinese armies conquered and occupied Vietnam for around 20 years, while the Chinese fleet sailed the China seas and the Indian Ocean, cruising as far as the east coast of Africa. The Chinese gained influence in eastern Moghulistan. Several maritime Asian nations sent envoys with tribute for the Chinese emperor. Domestically, the Grand Canal was expanded and became a stimulus to domestic trade. Over 100,000 tons of iron per year were produced. Many books were printed using movable type. The imperial palace in Beijing's Forbidden City reached its current splendor. It was also during these centuries that the potential of south China came to be fully exploited. New crops were widely cultivated and industries such as those producing porcelain and textiles flourished. In 1449 Esen Tayisi led an Oirat Mongol invasion of northern China which culminated in the capture of the Zhengtong Emperor at Tumu. Since then, the Ming became on the defensive on the northern frontier, which led to the Ming Great Wall being built. Most of what remains of the Great Wall of China today was either built or repaired by the Ming. The brick and granite work was enlarged, the watchtowers were redesigned, and cannons were placed along its length. At sea, the Ming became increasingly isolationist after the death of the Yongle Emperor. The treasure voyages which sailed Indian Ocean were discontinued, and the maritime prohibition laws were set in place banning the Chinese from sailing abroad. European traders who reached China in the midst of the Age of Discovery were repeatedly rebuked in their requests for trade, with the Portuguese being repulsed by the Ming navy at Tuen Mun in 1521 and again in 1522. Domestic and foreign demands for overseas trade, deemed illegal by the state, led to widespread wokou piracy attacking the southeastern coastline during the rule of the Jiajing Emperor (1507–1567), which only subsided after the opening of ports in Guangdong and Fujian and much military suppression. The Portuguese were allowed to settle in Macau in 1557 for trade, which remained in Portuguese hands until 1999. The Dutch entry into the Chinese seas was also met with fierce resistance, with the Dutch being chased off the Penghu islands in the Sino-Dutch conflicts of 1622–1624 and were forced to settle in Taiwan instead. The Dutch in Taiwan fought with the Ming in the Battle of Liaoluo Bay in 1633 and lost, and eventually surrendered to the Ming loyalist Koxinga in 1662, after the fall of the Ming dynasty. In 1556, during the rule of the Jiajing Emperor, the Shaanxi earthquake killed about 830,000 people, the deadliest earthquake of all time. 
The Ming dynasty intervened deeply in the Japanese invasions of Korea (1592–98), which ended with the withdrawal of all invading Japanese forces in Korea, and the restoration of the Joseon dynasty, its traditional ally and tributary state. The regional hegemony of the Ming dynasty was preserved at a toll on its resources. Coincidentally, with Ming's control in Manchuria in decline, the Manchu (Jurchen) tribes, under their chieftain Nurhaci, broke away from Ming's rule, and emerged as a powerful, unified state, which was later proclaimed as the Qing dynasty. It went on to subdue the much weakened Korea as its tributary, conquered Mongolia, and expanded its territory to the outskirt of the Great Wall. The most elite army of the Ming dynasty was to station at the Shanhai Pass to guard the last stronghold against the Manchus, which weakened its suppression of internal peasants uprisings. Qing dynasty (AD 1644–1912) The Qing dynasty (1644–1912) was the last imperial dynasty in China. Founded by the Manchus, it was the second conquest dynasty to rule the entirety of China proper, and roughly doubled the territory controlled by the Ming. The Manchus were formerly known as Jurchens, residing in the northeastern part of the Ming territory outside the Great Wall. They emerged as the major threat to the late Ming dynasty after Nurhaci united all Jurchen tribes and his son, Hong Taiji, declared the founding of the Qing dynasty in 1636. The Qing dynasty set up the Eight Banners system that provided the basic framework for the Qing military conquest. Li Zicheng's peasant rebellion captured Beijing in 1644 and the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus allied with the Ming general Wu Sangui to seize Beijing, which was made the capital of the Qing dynasty, and then proceeded to subdue the Ming remnants in the south. The decades of Manchu conquest caused enormous loss of lives and the economic scale of China shrank drastically. In total, the Qing conquest of the Ming (1618–1683) cost as many as 25 million lives. The early Manchu emperors combined traditions of Central Asian rule with Confucian norms of traditional Chinese government and were considered a Chinese dynasty. The Manchus enforced a 'queue order', forcing Han Chinese men to adopt the Manchu queue hairstyle. Officials were required to wear Manchu-style clothing Changshan (bannermen dress and Tangzhuang), but ordinary Han civilians were allowed to wear traditional Han clothing. Bannermen could not undertake trade or manual labor; they had to petition to be removed from banner status. They were considered a form of nobility and were given annual pensions, land, and allotments of cloth. The Kangxi Emperor ordered the creation of the Kangxi Dictionary, the most complete dictionary of Chinese characters that had been compiled. Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Conquests in Central Asia in the eighteenth century extended territorial control. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories, an uprising of three generals in Southern China who had been denied hereditary rule of large fiefdoms granted by the previous emperor. 
In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which was founded by the Ming loyalist Koxinga (Zheng Chenggong) in 1662 after the fall of the Southern Ming, and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk. By the end of Qianlong Emperor's long reign in 1796, the Qing Empire was at its zenith. The Qing ruled more than one-third of the world's population, and had the largest economy in the world. By area it was one of the largest empires ever. In the 19th century the empire was internally restive and externally threatened by western powers. The defeat by the British Empire in the First Opium War (1840) led to the Treaty of Nanking (1842), under which Hong Kong was ceded to Britain and importation of opium (produced by British Empire territories) was allowed. Opium usage continued to grow in China, adversely affecting societal stability. Subsequent military defeats and unequal treaties with other western powers continued even after the fall of the Qing dynasty. Internally the Taiping Rebellion (1851–1864), a Christian religious movement led by the "Heavenly King" Hong Xiuquan swept from the south to establish the Taiping Heavenly Kingdom and controlled roughly a third of China proper for over a decade. The court in desperation empowered Han Chinese officials such as Zeng Guofan to raise local armies. After initial defeats, Zeng crushed the rebels in the Third Battle of Nanking in 1864. This was one of the largest wars in the 19th century in terms of troop involvement; there was massive loss of life, with a death toll of about 20 million. A string of civil disturbances followed, including the Punti–Hakka Clan Wars, Nian Rebellion, Dungan Revolt, and Panthay Rebellion. All rebellions were ultimately put down, but |
and environmental health engineering are other terms being used. Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions. Forensic engineering Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a Forensic engineering investigation is to locate cause or causes of failure with a view to improve performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents. Geotechnical engineering Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering. Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists and soil scientists. Materials science and engineering Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and mix asphalt concrete, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers. Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis. 
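The geotechnical engineering paragraph above notes that, unlike steel or concrete, soil strength is stress-dependent. A minimal sketch of that idea, assuming the classical Mohr-Coulomb failure criterion and purely illustrative soil parameters (not measured data), is shown below: shear strength grows with the effective normal stress acting on the soil.

```python
import math

def mohr_coulomb_strength(sigma_n_eff_kpa: float, cohesion_kpa: float, friction_angle_deg: float) -> float:
    """Drained shear strength tau_f = c' + sigma'_n * tan(phi') (Mohr-Coulomb)."""
    return cohesion_kpa + sigma_n_eff_kpa * math.tan(math.radians(friction_angle_deg))

# Illustrative, assumed parameters for a medium-dense sand.
cohesion = 0.0          # effective cohesion c', kPa (cohesionless sand)
friction_angle = 33.0   # effective friction angle phi', degrees

# The same soil is stronger under higher effective confining stress,
# which is what "stress-dependent strength" means in the text above.
for sigma in (25.0, 50.0, 100.0, 200.0):  # effective normal stress, kPa
    tau = mohr_coulomb_strength(sigma, cohesion, friction_angle)
    print(f"sigma'_n = {sigma:6.1f} kPa -> tau_f = {tau:6.1f} kPa")
```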
Site development and planning Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges. Structural engineering Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, off shore structures like oil and gas fields in the sea, aerostructure and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be self weight of the structures, other dead load, live loads, moving (wheel) load, wind load, earthquake load, load from temperature change etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering. Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructability, safety, aesthetics and sustainability. Surveying Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures. Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction. Land surveying In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying. 
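As a toy illustration of the load-to-stress reasoning described in the structural engineering paragraph above, the sketch below checks a simply supported steel beam under a uniform load: it computes the maximum bending moment and the resulting bending stress, then compares that stress with an allowable value. The span, load, section modulus, and allowable stress are assumed example numbers, not values taken from any design code.

```python
# Elastic bending check for a simply supported beam under a uniform load.
# M_max = w * L^2 / 8 at midspan; bending stress sigma = M_max / S.
# All numbers are illustrative assumptions for this sketch.

w = 12.0e3             # uniform load, N/m (12 kN/m)
L = 6.0                # span, m
S = 1.3e-3             # elastic section modulus of an assumed mid-size I-section, m^3
sigma_allow = 165.0e6  # assumed allowable bending stress, Pa

M_max = w * L**2 / 8.0        # maximum bending moment, N*m
sigma = M_max / S             # extreme-fibre bending stress, Pa
utilisation = sigma / sigma_allow

print(f"M_max = {M_max / 1e3:.1f} kN*m")
print(f"bending stress = {sigma / 1e6:.1f} MPa")
print(f"utilisation = {utilisation:.2f} ({'OK' if utilisation <= 1.0 else 'over-stressed'})")
```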
Construction surveying Construction surveying is generally performed by specialized technicians. Unlike land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks: Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible; "lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings; Verifying the location of structures during construction; As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans. Transportation engineering Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management. Municipal or urban engineering Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.) Water resources engineering Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows | actions. Forensic engineering Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. 
The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a Forensic engineering investigation is to locate cause or causes of failure with a view to improve performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents. Geotechnical engineering Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering. Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists and soil scientists. Materials science and engineering Materials science is closely related to civil engineering. It studies fundamental characteristics of materials, and deals with ceramics such as concrete and mix asphalt concrete, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers. Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis. Site development and planning Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges. Structural engineering Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, off shore structures like oil and gas fields in the sea, aerostructure and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be self weight of the structures, other dead load, live loads, moving (wheel) load, wind load, earthquake load, load from temperature change etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). 
Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering. Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructability, safety, aesthetics and sustainability. Surveying Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures. Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction. Land surveying In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying. Construction surveying Construction surveying is generally performed by specialized technicians. Unlike land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks: Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible; "lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings; Verifying the location of structures during construction; As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans. Transportation engineering Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. 
This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management. Municipal or urban engineering Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.) Water resources engineering Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. Although the actual design of the facility may be left to other engineers. Hydraulic engineering is concerned with the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others. Civil engineering systems Civil engineering systems is a discipline that promotes the use of systems thinking to manage complexity and change in civil engineering within its wider public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the important factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning. 
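To make the hydraulic engineering description above concrete, here is a minimal sketch of one of its standard tools, Manning's equation for steady uniform flow in an open channel. The rectangular channel geometry, roughness coefficient, and bed slope are assumed example values.

```python
def manning_discharge(width_m: float, depth_m: float, n: float, slope: float) -> float:
    """Discharge Q (m^3/s) in a rectangular open channel via Manning's equation (SI units).

    Q = (1/n) * A * R**(2/3) * sqrt(S), where A = b*y is the flow area
    and R = A / (b + 2*y) is the hydraulic radius.
    """
    area = width_m * depth_m                      # flow area, m^2
    wetted_perimeter = width_m + 2.0 * depth_m    # m
    hydraulic_radius = area / wetted_perimeter    # m
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Illustrative concrete-lined rectangular channel (assumed values).
discharge = manning_discharge(width_m=3.0, depth_m=1.2, n=0.013, slope=0.001)
print(f"Estimated discharge: {discharge:.2f} m^3/s")
```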
See also Architectural engineering Civil engineering software Engineering drawing Glossary of civil engineering Index of civil engineering articles List of civil engineers List of engineering branches List of Historic Civil Engineering Landmarks Macro-engineering Railway engineering Site survey Associations American Society of Civil Engineers Canadian Society for Civil Engineering Chartered Institution of Civil Engineering Surveyors Earthquake Engineering Research Institute Engineers Australia European Federation of National Engineering Associations International Federation of Consulting Engineers Indian Geotechnical Society Institution of Civil Engineers Institution of Structural Engineers Institute of Engineering (Nepal) International Society of Soil Mechanics and Geotechnical Engineering Institution of Engineers, Bangladesh Institution of Engineers (India) Institution of Engineers of Ireland Institute of Transportation Engineers Japan Society of Civil Engineers Pakistan Engineering Council Philippine Institute of Civil Engineers Transportation Research Board References Further reading External links The Institution of Civil |