Learn About Osteopathic Medicine
What is osteopathic medicine?
You are more than just the sum of your body parts. That's why doctors of osteopathic medicine (D.O.s) practice a "whole person" approach to medicine. Instead of just treating specific symptoms, osteopathic physicians concentrate on treating you as a whole for truly patient-centered care.
Osteopathic physicians understand how all the body's systems are interconnected and how each one affects the others. They focus special attention on the musculoskeletal system, which reflects and influences the condition of all other body systems. This system of bones and muscles makes up about two-thirds of the body's mass, and a routine part of the osteopathic patient examination is a careful evaluation of these important structures.
D.O.s know that the body's structure plays a critical role in its ability to function. They can use their eyes and hands to identify structural problems and to support the body's natural tendency toward health and self-healing. Osteopathic physicians also use their ears to listen to you and your health concerns. Doctors of osteopathic medicine help patients develop attitudes and lifestyles that don't just fight illness, but help prevent it, too. Millions of Americans prefer this concerned and compassionate care and have made D.O.s their doctors for life.
What’s unique about osteopathic medicine?
Most distinctively, D.O.s are trained in osteopathic manual medicine (OMM), which they can use to help diagnose and treat illness.
What is OMM?
Is osteopathic medicine a new form of medicine?
How many osteopathic physicians are there in Michigan? In the U.S.?
Michigan has one of the strongest presences of osteopathic medicine in the country, with around 7,000 osteopathic physicians in the state. Nationally, there are nearly 60,000 D.O.s.
Where do osteopathic physicians receive their training?
Currently, there are 23 osteopathic medical schools in 26 locations throughout the United States. Michigan’s only osteopathic medical school is the Michigan State University College of Osteopathic Medicine. Established in 1969, it has educated nearly 4,000 D.O.s and is consistently ranked among the top ten for primary care education among all medical schools, either M.D. or D.O., in the nation.
The osteopathic curriculum involves four years of undergraduate study, four years of medical school, and two to six years of residency training. Many D.O.s choose a residency in a specialty area such as internal medicine, surgery, family practice, pediatrics, radiology, or pathology.
For more information, visit the MSU-COM website at
|
Science in Society
Mad Science: Nine of the oddest experiments ever
Reto Schneider has collected some of the most bizarre experiments conducted in the name of science for his book The Mad Science Book. Here he selects nine of his favourites.
1. Dogbot meets real Dog
In 2003, researchers from Eötvös Loránd University in Budapest and the Sony Computer Science Laboratory in Paris tried to find out whether dogs would accept Sony's commercial dogbot AIBO as one of their own. The experiment resulted in a formal scientific publication, "Social behaviour of dogs encountering AIBO, an animal-like robot in a neutral and in a feeding situation", and the insight that the answer is "no".
Watch a video of the experiment
2. The psychonaut
Lilly later gave up scientific research and founded the firm Samadhi Tanks, which manufactured flotation tanks for domestic use. Having become something of a New Age guru, he died in 2001.
One of the few scientific experiments honoured by Hollywood, Lilly's work was the model for the 1980 film Altered States. To no one's surprise, the real experiments were done with much less flashy equipment than that shown in the film. Lilly sometimes had to switch off the light himself and then climb, in complete darkness, into a tank, which was little more than an outsize bathtub.
Watch the title sequence of Altered States, which shows a sophisticated vertical tank that never actually existed
3. Psychology's atom bomb
This is probably the most famous experiment ever not actually done. American market researcher James Vicary claimed that he had exposed the audience in a cinema in Fort Lee, NJ to the secret instructions "Eat Popcorn!" and "Drink Coke!" As a result, the sales of Coca-Cola in the cinema foyer increased by 18.1%, while those of popcorn rose by 57.5%.
Vicary later admitted that the whole story had been fabricated. But it stuck and became an urban myth.
Vicary's experiment had its last major airing to date during the US presidential election of 2000, when a TV advert promoting the Republican candidate George W. Bush momentarily flashed up the word "RATS", unseen by viewers, as a Democrat policy was mentioned. See the ad for yourself: the word appears at 0:25.
4. Holidaying in a draught
Being a guinea pig for the British government's Common Cold Unit in 1946 was very popular with students. They saw it as a cheap holiday: free accommodation in spacious flats fully equipped with books, games, radio and telephone, and leisure time spent playing table tennis, badminton, or golf. Volunteers were even paid three shillings a day.
The students were instructed to maintain a distance of at least 9 metres from all unprotected persons, other than their flatmates. The unpleasant part of the experiment began when the participants had to spend half an hour in a draughty corridor after taking a hot bath, had to wear wet socks for the rest of the day, and were infected with nasal secretion from a cold sufferer.
To everyone's surprise the experiments demonstrated that the common cold had nothing to do with cold temperatures.
Watch a (hilarious) film about the experiment
5. Remote control bullfight
Spanish neurologist Jose Delgado from Yale University was not only convinced that electrical stimulation of the brain was the key to understanding the biological bases of social behaviour: he was also prepared to prove his case in a rather risky fashion.
On a spring evening in 1964 he came face to face with Lucero, a 250-kilogram fighting bull owned by landowner Ramón Sánchez, who had granted Delgado the use of a small practice ring on his estate of La Almarilla in Córdoba for the experiment.
Lucero lumbered towards him. Delgado pressed a button on the remote control, activating the radio-controlled electrodes he had implanted in the bull's brain a few days before the experiment. The animal's aggression instantly dissipated: Lucero skidded to a halt and trotted off.
Watch a video of Delgado's encounter with the bull
Delgado's experiment was considered newsworthy enough to be published on the front page of the New York Times, albeit only a year after it was actually done.
6. The 28-hour day
At one time, one of the great unsolved mysteries of sleep research was whether the human sleep-wake rhythm of 24 hours was merely a habit, changeable at any time, or whether people had an internal, hard-wired body clock.
So sleep researcher Nathaniel Kleitman set out to find a location where there was no difference between day and night.
© Copyright Reed Business Information Ltd.
|
The Farber Collection
René Portocarrero
1912-1985 Havana
René Portocarrero’s body of work still awaits a study that will confirm him as one of the most magnificent artists in the history of Cuban painting, despite prolific and sometimes too-commercial periods that have weighed down his reputation.
Portocarrero was born in El Cerro, a Havana neighborhood that had been a resort for the wealthy in the late 1800s, and which would inspire his early series of paintings. Though mostly self-taught, he studied at the San Alejandro Academy, and emerged as an artist with the second wave of the Cuban avant-garde, around the time of World War II. He worked at the Estudio Libre para Pintores y Escultores (Free Atelier for Painters and Sculptors) in Havana. His prolific illustrations appeared in literary magazines such as Verbum, Espuela de Plata, and Orígenes (all sponsored by poet José Lezama Lima), and he painted several murals. In 1944-1945, he exhibited his works at the Julien Levy Gallery, in the “Modern Cuban Painters” exhibition at New York’s Museum of Modern Art, at the San Francisco Museum of Art, and at other art centers in Haiti, Mexico, and Moscow.
Like Amelia Peláez, Carlos Enríquez, and Mariano Rodríguez, Portocarrero portrayed the “criollo space,” utilizing a creative assimilation of cubism, Mexican muralismo, surrealism, and abstractionism, as well as the influence of individual figures such as Picasso and Matisse. In his inveterate search for cubanidad, the Cuban essence, he chose visual motifs from his physical as well as cultural environment: domestic interiors, foods, women, celebrations, religions and popular saints, views of Havana and Trinidad, and colonial-era buildings. Over the course of his career he returned to those motifs again and again, in numerous series. Through a characteristically precise use of the spatula, he made his images easily identifiable by their dense amalgam of vibrant textures: perfect examples of Latin American baroque.
Despite occasional incursions into rural landscape—a near-cliché of the Cuban pictorial tradition—from the 1940s onward, Portocarrero focused primarily on the city and its architecture. Between the 1942 drawing Catedral (Cathedral) and the 1960 painting Catedral en Amarillo (Cathedral in Yellow), there is a nearly two-decade gulf, rife with the search for and achievement of a personal style. In both pieces, the subject is the Cathedral of Havana, built between 1748 and 1777, which presides over the plaza of the same name in the historical section within the walls of the old city.
An indefatigable draughtsman, Portocarrero considered drawing an art in its own right. His evident intention is to create a dislocated, motley, crowded space, where the Cathedral looms over the rest of the buildings. The artist shatters both academic perspective and the rigid design of the colonial plaza to offer his unique vision. His hand is lively but firm, and his signature chromatic outbursts are notably absent.
When, in 1960, Portocarrero painted Catedral en Amarillo, he had already become an icon of Cuban art. Although he had gone through an abstract period in the 1950s, producing works in the vein of Paul Klee, he now returned to the fleshy figurativeness, bursting with color, that defined his style. Once again transfigured by his brushes, the cathedral is a subject to which he would often return, in the midst of a febrile artistic activity, until the 1970s. In this piece he forgoes any illusion of volume, placing the linear mass of the building against a plain warm background, almost as if in a hard-edge painting. The use of impasto, thickly applied in swift brushstrokes, allows him to delineate the church as a svelte edifice of emphatically vertical lines, whose towers—unlike the original—are practically enmeshed in the main body of the building. The central oculus becomes a stained-glass window in the primary colors dear to colonial-style stained glass, and columns and pilasters entwine with tropical doors and blinds, complete with resplendent fanlights. Portocarrero has transformed the spiritual building into a temple of sensuality, infused with the optimistic affirmation of life ever-present in his renewed visions of Havana.
—Abelardo G. Mena Chicuri
|
Lucy the Elephant
Lucy the Elephant, located in Margate City, New Jersey, stands 65 feet high and is made from tin sheeting and nearly one million pieces of wood. It is the oldest animal-shaped building in the world, and also holds the distinction of being the largest elephant in the world. This gigantic building is a true historical treasure, and has survived hurricanes, rowdy drunks, Prohibition and demolition threats in its rich 124-year history.
James V. Lafferty, an engineer and inventor whose parents had immigrated to the United States from Ireland, constructed Lucy in 1881.
While in his twenties, Lafferty acquired a number of pieces of land in the South Atlantic City area. This land was not ideal for development, given that it was cut off from Atlantic City by a tidal creek that filled during high tide, making it impossible for anyone to visit Lafferty's properties until the tide subsided.
Lafferty came up with the idea of Lucy the Elephant, a huge elephant-shaped building, as a ploy to attract real estate development and tourism to his land. Architect William Free was hired to design the structure, and a contractor from Philadelphia was hired to build it. The 12,000 square feet of tin sheeting and nearly one million pieces of wood were likely transported to the construction site by boat.
The Elephant was completed in 1881 for a sizeable sum of money. The reported cost at the time was $25,000, though Lafferty was known to claim that the cost by the end of the project was closer to $38,000. Lafferty then began placing newspaper advertisements offering building lots in both local and Philadelphia area papers.
The construction of animal-shaped buildings was unprecedented, and Lafferty applied for a patent to protect his idea. On Dec. 5, 1882, Lafferty was granted U.S. Patent No. 268,503, giving him the exclusive right to create and sell animal-shaped structures for 17 years.
Patent in hand, Lafferty spearheaded the construction of more animal-shaped structures. He built two more elephants - one at Cape May, and one at Coney Island - although Lucy is the only one of the three to have survived.
In 1887, Lafferty sold the South Atlantic land, having overextended himself elsewhere along the New Jersey Shore and New York. He sold Lucy the Elephant, along with some other property, to Anthony Gertzen Sr. of Philadelphia. Anthony Sr. died in 1902, and his properties were divided amongst his children.
John Gertzen, Anthony's third son, came into possession of Lucy the Elephant and began offering tours of the structure for 10 cents. It is said that John Gertzen's wife Sophia, was responsible for giving the Elephant the name “Lucy", though this has never been confirmed.
The tours offered by the Gertzens attracted many celebrities, including theatre stars and opera singers, as well as future President Woodrow Wilson, who was said to be a generous tipper.
Lucy's next residents were an English doctor and his family, who leased the Elephant from the Gertzens in 1902, with the intention of turning it into a summer home. The family renovated Lucy's interior, creating four bedrooms, a dining room, a kitchen, a parlor, and a small bathroom located in one of Lucy's shoulders.
Lucy did not remain a home for long, however. A hurricane in 1903 severely damaged the Elephant and left it half buried in sand. Volunteers were enlisted to dig Lucy out and move her away from the sea. At this point the Gertzens converted the Elephant into a tavern, and it became a haven for rowdy drinkers until 1904, when Lucy was almost destroyed by a fire caused by an overturned oil lantern.
John Gertzen died in 1916, and Sophia once again began charging 10-cent admission for tours of Lucy to support the Gertzen family. Lucy again became a popular tourist destination. Woodrow Wilson, now President of the United States, visited Lucy for the second time, along with his wife, in 1916.
The Gertzen family went back into the business of selling alcohol with the end of Prohibition in 1933. The family created an old-fashioned beer garden around Lucy and named it the Elephant Café. Sophia sold the café after the Second World War, but kept Lucy the Elephant and the Gertzen summer home. She would later repurchase the café and convert it into the Elephant Hotel.
Sophia died in 1963, and her children continued to run the family business. The Elephant Hotel continued to operate, and Lucy the Elephant remained a tourist attraction until 1970, when the Gertzens retired. They sold the business and donated Lucy to the City of Margate before moving to Florida.
By this time Lucy the Elephant was in serious disrepair, and it seemed that demolition of the famous landmark was inevitable. The Margate Civic Association took up Lucy's cause, and began to look for a way to save the Elephant. The most immediate problem faced by the committee was that Lucy needed to be moved, as a developer was purchasing the land where the Elephant currently stood.
Members of the Margate Civic Association approached the city with the proposal that Lucy should be relocated to some city parkland two blocks south of the Elephant's current location. The city agreed to the proposal, and the Civic Association acquired the services of John A. Milner, a restoration architect, to assess Lucy's structure and determine if moving the Elephant would even be possible.
Milner determined that Lucy's structure would hold, but a number of challenges still remained. A major obstacle was that the Civic Association had been given only 30 days to remove the Elephant by the new owners of the land that Lucy currently stood on. In addition, moving the structure and constructing a new foundation for Lucy was estimated to cost $24,000.
The Save Lucy Committee was formed, and it undertook a number of fundraising campaigns, including bake sales and canvassing drives. It was unable to raise the $24,000 in so little time, however, and found itself $10,000 short with the deadline looming. Thankfully, an anonymous donation allowed the move to proceed.
July 20, 1970 was to be Lucy's moving day. The movers began the process of lifting the gigantic structure with special jacks, and installing the dollies that would be used to transport the Elephant to its new location.
Three days before the moving date, the committee encountered a huge and unanticipated problem that threatened disaster. The Atlantic Beach Corporation, the owners of the land immediately adjacent to the new site, filed for a legal injunction to prevent Lucy from moving to its new location, on the grounds that the presence of the Elephant would reduce the local property values.
The Save Lucy Committee immediately appealed to Atlantic County Judge Benjamin Rimm, who held an emergency Saturday hearing to decide the case. Judge Rimm considered both arguments, but ruled in favor of the Save Lucy Committee. The move could continue.
The committee had another brief scare in the form of a heavy fog on the morning of July 20, which mercifully cleared before the event.
Seven hours later, Lucy had been safely moved to her new home without incident. The sight of the enormous elephant being wheeled down the street created a huge spectacle, which generated national and international publicity. Donations to the Save Lucy Committee began to arrive from around the globe.
Restoration of the Elephant became the next challenge for the Save Lucy Committee. To help fund the project, and to protect Lucy for future generations, the committee applied to have the Elephant recognized in the National Register of Historic Places.
The committee was able to raise another $124,000 through government grants and private donations. Mounting costs and unexpected problems meant that this sum of money was not enough to fully restore the Elephant, but it proved to be enough to allow Lucy to be reopened for tours in 1974.
The restoration of Lucy's exterior was tackled next, funded partially from the proceeds from the tours. It unfortunately took nearly 3 years to complete the exterior repairs due both to numerous delays in receiving government grants, and to mounting construction costs. However, by 1976, the elephant had a newly restored and painted exterior, and the U.S. Department of the Interior officially recognized Lucy the Elephant as a National Historic Landmark.
The restoration of Lucy the Elephant has continued since 1976. The interior was further restored in 1980, and replicas of many of the buildings that once surrounded the Elephant at its original location have been rebuilt.
Today, the Save Lucy Committee continues its efforts to preserve and enhance this historic structure. The interior of the Elephant has been completely restored, and thousands of visitors annually tour the innards of this massive Elephant, and purchase souvenirs from the adjacent gift shop.
VisitNJShore - The Most Comprehensive Guide to the New Jersey Shore
|
Unconscious is a state of the brain: a lack of consciousness, a condition of the physical brain in which awareness of sensory stimuli is not obtained. It is the state of not knowing, not perceiving, not being aware. The term is sometimes also used in the sense of having an unconscious awareness of something that does not seem to be normally known.
Copyright 2014
CARM Office number: 208-466-1301
Email: [email protected]
Mailing Address: CARM, PO BOX 1353, Nampa ID 83653
|
When it comes to the most typical cause of poor power factor in a facility, motor inductance is a likely culprit. The problem worsens when motors are not loaded to their full capacity. Harmonic currents reflected back into the system also reduce power factor.
The good news is you can correct low power factor by adding power factor correction capacitors to the facility’s power distribution system. This is best accomplished via an automatic controller that switches capacitors, and sometimes reactors, on and off. The most basic applications use a fixed capacitor bank.
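As a rough illustration of the sizing involved (this sketch and its numbers are generic textbook values, not from the article), the reactive power a capacitor bank must supply follows directly from the power triangle:

```python
import math

def correction_kvar(real_power_kw, pf_current, pf_target):
    """kVAR a capacitor bank must add to raise the power factor of a
    real load from pf_current to pf_target (power-triangle identity)."""
    return real_power_kw * (math.tan(math.acos(pf_current)) -
                            math.tan(math.acos(pf_target)))

# Example: a 500 kW load running at 0.75 lagging, corrected to 0.95,
# needs roughly 277 kVAR of capacitance.
print(round(correction_kvar(500, 0.75, 0.95), 1))  # 276.6
```

An automatic controller effectively performs this calculation continuously, switching capacitor stages in and out as the load mix changes.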
Power factor correction capacitors can reduce your energy costs by avoiding the premium rates electric utilities charge when your power factor falls below specified values. Facilities typically install these capacitors when their inductive loads cause power factor problems for their neighbors or the electric utility.
Under normal conditions, capacitors should operate trouble-free for many years. However, conditions such as harmonic currents, high ambient temperatures, and poor ventilation can cause premature failures in power factor correction capacitors and related circuitry. These failures can lead to substantial increases in energy expenses and — in extreme cases — create the potential for fires or explosion (Photo 1). Therefore, it’s critical to inspect power factor correction capacitors on a regular basis to ensure they’re working properly. In fact, most manufacturers recommend that preventive maintenance be performed twice a year.
Safety first
These energy storage devices can deliver a lethal shock long after the power serving them has been disconnected. Although most capacitors are equipped with a discharge circuit, when the circuit fails, a shock hazard exists for an extended period of time. When testing is required with the voltage applied, you must use extreme caution. Capacitor bank maintenance requires training specific to the equipment, its application, and the task you are expected to perform. The proper personal protective equipment (PPE) per NFPA 70E is also required.
Additional hazards are involved in working with current transformer (CT) circuits, including the wiring and shorting block. The CT itself is normally located in the switchboard, not in the capacitor bank enclosure. Even after the capacitor bank has been de-energized, there is a danger of electrical shock from the CT wiring. If the CT circuit is opened when there is a load on the switchboard, the CT can develop a lethal voltage across its terminals.
Visual inspection and cleaning
Start by performing a complete visual inspection of the system. Look for discolored components, bulging and/or leaking capacitors, and signs of heating and/or moisture. Clean and/or replace filters for cooling fans. Clean the units using a vacuum — never use compressed air.
Prior to re-energizing the capacitors, perform an insulation integrity test from the bus phase-to-phase and phase-to-ground points. Note: The control power transformer line-side breaker or fuses must be removed to prevent erroneous readings phase-to-phase.
Infrared inspection
The most valuable tool for evaluating capacitor banks is a thermal imager. The system should be energized for at least 1 hour prior to testing. To begin, check the controller display to determine if all the stages are connected. Next, verify that the cooling fans are operating properly. Conduct an infrared examination of the enclosure prior to opening the doors. Based on your arc flash assessment, wear the required PPE when performing tasks near energized equipment.
Examine power and control wiring with the thermal imager, looking for loose connections. A thermal evaluation will identify a bad connection by showing a temperature increase due to the additional resistance at the point of connection. A good connection should measure no more than 20°F above the ambient temperature. There should be little or no difference in temperature phase-to-phase or bank-to-bank at points of connection.
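When logging scan results, the rule of thumb above is easy to script. This helper is purely illustrative (the connection names are hypothetical); it flags any point whose rise over ambient exceeds the 20°F guideline:

```python
def flag_connections(ambient_f, readings_f, limit_f=20.0):
    """Return {point: rise} for connections more than limit_f over ambient."""
    return {point: round(temp - ambient_f, 1)
            for point, temp in readings_f.items()
            if temp - ambient_f > limit_f}

scan = {"A-phase lug": 95.0, "B-phase lug": 118.0, "C-phase lug": 96.5}
print(flag_connections(90.0, scan))  # {'B-phase lug': 28.0}
```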
An infrared evaluation will detect a blown fuse by highlighting temperature differences between blown and intact fuses. A blown fuse in a capacitor bank stage reduces the amount of correction available. Some units are equipped with blown fuse indicators; others are not. If you find a blown fuse, shut down the entire bank, and determine what caused the fuse to blow. Some common causes are bad capacitors, reactor problems, and bad connections at line-fuse, load-fuse, or fuse clip points.
You should also look for differences in temperature of individual capacitors (Photo 2). If a capacitor is not called for or connected at the time of examination, then it should be cooler. Also keep in mind that the temperature of components might be higher in the upper sections due to convection. However, if according to the controller all stages are connected, then temperature differences usually indicate a problem. For example, high pressure may cause the capacitor’s internal pressure interrupter to operate before the external fuse, thus removing the capacitor from the circuit without warning.
Current measurements
As part of your preventive maintenance procedures, take a current measurement on all three phases of each stage and record it using a multimeter and current clamp. Use the multimeter to measure the current input to the controller from the current transformer in the switchboard, using a current clamp around the CT secondary conductor.
A calculation is required to convert the measured current value to the actual current flowing through the switchboard. If the current transformer is rated 3,000A:5A and you measure 2A, the actual current is: (3,000A ÷ 5A) × 2A = 1,200A.
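That worked example can be expressed as a one-line scaling function, shown here as a sketch:

```python
def actual_current(ct_primary_a, ct_secondary_a, measured_a):
    """Scale a clamp reading on the CT secondary up to switchboard current."""
    return (ct_primary_a / ct_secondary_a) * measured_a

# A 3,000A:5A CT with 2A measured on the secondary conductor:
print(actual_current(3000, 5, 2))  # 1200.0
```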
In addition, measure the current through the breaker feeding the capacitor bank for phase imbalance, with all stages connected. Maintain a log of all readings to provide a benchmark for readings taken at a later date.
Power factor measurements
Measuring power factor requires a meter that can simultaneously measure voltage, current, power, and demand over at least a 1-second period. A digital multimeter (DMM) cannot perform these measurements, but a power quality analyzer with a current clamp will measure all of these elements over time, helping you to build an accurate picture of the facility’s power consumption. A power logger, another type of power quality tool, can perform a 30-day load study to provide an even better understanding of power factor and other parameters.
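The arithmetic the analyzer performs internally is simple: true power factor is real power divided by apparent power. A minimal single-phase sketch (illustrative values only):

```python
def power_factor(real_power_w, volts_rms, amps_rms):
    """True power factor: real power divided by apparent power (V x I)."""
    return real_power_w / (volts_rms * amps_rms)

# 40 kW of real power drawn at 480 V and 100 A:
print(round(power_factor(40_000, 480, 100), 3))  # 0.833
```

The hard part, and the reason a power quality analyzer is needed, is capturing voltage, current, and power simultaneously rather than the division itself.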
Capacitance measurements
Before measuring capacitance, de-energize the capacitor bank and wait for the period specified in the manufacturer’s service bulletin. While wearing the proper PPE, confirm with a properly rated meter that no AC is present. Follow your facility’s lockout/tagout procedure. Using a DC meter rated for the voltage to be tested and set to 1,000VDC, test each stage — phase-to-phase and phase-to-ground. There should be no voltage present. The presence of voltage indicates the capacitor may not be discharged. If no voltage is detected, measure capacitance with the meter and compare the reading to the manufacturer’s specifications for each stage. Although power factor correction capacitors are designed to provide years of service, the key is performing proper maintenance as recommended by the individual manufacturer.
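The comparison against the manufacturer's specification can be sketched as below; the ±10% tolerance is an assumed placeholder, so substitute the figure from the data sheet for the unit you are testing:

```python
def within_tolerance(measured_uf, rated_uf, tol=0.10):
    """True if a measured capacitance is within tol of its rated value."""
    return abs(measured_uf - rated_uf) <= tol * rated_uf

print(within_tolerance(94.0, 100.0))  # True  (6% low)
print(within_tolerance(82.0, 100.0))  # False (18% low, likely failing)
```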
Kennedy is a licensed electrical contractor and technical author for Fluke Corp., Everett, Wash. He also serves as an OSHA-authorized general industry outreach safety trainer and instructor/curriculum developer for Prairie State College, Lakeland College, Joliet Junior College, and the Indiana Safety Council.
|
Blood oath (Hungarians)
From Wikipedia, the free encyclopedia
Fresco by Bertalan Székely in the ceremonial hall of the city hall of Kecskemét, Hungary. Created between 1895 and 1897, the painting features a depiction of the bull's head bowl, from the treasure of Nagyszentmiklós.[1]
The blood oath (Hungarian: vérszerződés, lit. "blood contract") was, according to tradition, a pact among the leaders of the seven Hungarian tribes, traditionally held to be the first, unwritten constitution of the Hungarian nation. Its story, along with the terms agreed upon in it, is mostly known from the somewhat unreliable Gesta Hungarorum, a chronicle written between 1196 and 1203 and thus possibly influenced by 12th-century laws and customs. The oath was sealed by the seven leaders – Álmos, Előd, Ond, Kond, Tas, Huba and Töhötöm – by cutting their arms and letting their blood into a chalice. This practice was likely used traditionally to seal exceptionally strong oaths, and there must have been several similar oaths, but the phrase "blood oath" usually refers to the one by the seven leaders.
In the Gesta
The blood oath is usually regarded to have taken place in the 9th century, under High Prince Álmos, in Etelköz, before the migration into the Carpathian basin. The author of Gesta – only known as "Magister P" and generally referred to as "Anonymus" – narrated its story in his book.
"Then they said to Chieftain Álmos together: »We have chosen you, from this day onward, to be our leader and commander, and wherever your destiny takes you, we are bound to follow.« Then each of the aforementioned men let, in accord with Pagan custom, his blood into a vessel, and sanctioned his oath therewith. And although they were Pagans, still they kept this oath they made together until their death.
And thus was the first part of the oath: That as long as they live and their descendants live, their leader will always be from Álmos's lineage. And thus was the second part of the oath: That all wealth acquired by them will be divided between them. And thus was the third part of the oath: That the nobles who have chosen Álmos as their leader by their own will, and their descendants, will always be included in the leader's council and will bear the country's offices. And thus was the fourth part of the oath: If someone of their descendants would ever be disloyal to the leader or would incite disagreement between the leader and his kin, then he should have his blood spilt, just as the leaders' blood was let from their body when they swore their oath to Chieftain Álmos. And thus was the fifth part of the oath: If a descendant of Álmos or the other leaders would violate the terms of this agreement, he should be forever cursed. The names of these seven men were: Álmos, father of Árpád; Előd, father of Szabolcs, a forefather of the Csák clan; Kend, father of Korcán; Ond, father of Ete, a forefather of the Kalán and Kölcse clans; Tas, father of Lél; Huba, forefather of the Szemere clan; the seventh was Tétény, father of Horka, whose sons were Gyula and Zombor, forefathers of the Maglód clan, which will be written about later. But enough of this, let's follow the course of history.” – Anonymus: Gesta Hungarorum[2]
According to contemporary sources, similar blood oaths were common among nomadic peoples akin to the Hungarians, such as the Scythians. Herodotus described a Scythian ritual in which "a large earthen bowl is filled with wine, and the parties to the oath, wounding themselves slightly with a knife or an awl, drop some of their blood into the wine; then they plunge into the mixture a scymitar, some arrows, a battle-axe, and a javelin, all the while repeating prayers; lastly the two contracting parties drink each a draught from the bowl, as do also the chief men among their followers."[3]
The description of the oath-taking ceremony mirrors the political and societal changes of Anonymus' lifetime. The increasing power of the nobles and their need for the codification of their rights culminated in the issuing of the Golden Bull of 1222. Several historians have concluded that Anonymus' intention in writing down this agreement was to express the societal changes of his own period and to support the fight for the rights of the nobility, as a kind of historical justification. According to historian István Nemeskürty, "The aim of Magister P. (Anonymus) is to justify the rights and claims of 13th century Hungarian nobility and create a lineage going back to the Conquest for all of his friends and family. Also, although Anonymus stresses that his works are based on written sources, he wanted to create a literary work in the style of his own time period."[4]
1. ^ A hónap művésze: Székely Bertalan
2. ^ Anonymus: Gesta Hungarorum
3. ^ The History of Herodotus, Book IV
4. ^ Nemeskürty István: Mi magyarok. Akadémiai Kiadó, Budapest, 1993, p. 89.
|
The Einstein Theory of Relativity
From Wikipedia, the free encyclopedia
A frame from a German relativity film produced in 1922, published in Scientific American.
The Einstein Theory of Relativity (1923) is a silent film directed by Max and Dave Fleischer and released by Fleischer Studios.
In August 1922, Scientific American published an article explaining their position that a silent film would be unsuccessful in presenting Albert Einstein's theory of relativity to the general public. They argued that only as part of a broader educational package including lecture and text would such a film be successful. Scientific American then went on to review frames from an unnamed German film reported to be financially successful.
Six months later, on February 11, 1923, the Fleischers released their relativity film, produced in collaboration with popular science journalist Garrett P. Serviss to accompany his book on the same topic. Two versions of the Fleischer film are reported to exist: a shorter two-reel (20-minute) edit intended for general theater audiences, and a longer five-reel (50-minute) version intended for educational use.[1]
The Fleischers lifted footage from the German predecessor, Die Grundlagen der Einsteinschen Relativitäts-Theorie,[2] directed by Hanns-Walter Kornblum, for inclusion in their film. Presented here are images from both the Fleischer film and the German film. Even if actual footage was not recycled into The Einstein Theory of Relativity, these images, together with the text of the Scientific American article, suggest that original visual elements from the German film were.[3]
This film, like much of the Fleischers' work, has fallen into the public domain. Unlike Fleischer Studios' Superman or Betty Boop cartoons, The Einstein Theory of Relativity has very few surviving prints and is available in 16mm from only a few specialized film preservation organizations.
1. ^ "Relativity in the Films". Scientific American. August 1922. Retrieved 2008-07-14.
2. ^ Die Grundlagen der Einsteinschen Relativitäts-Theorie at the Internet Movie Database
3. ^ "Re: First animated feature ever". Archived from the original on 2006-12-31. Retrieved 2009-04-07.
|
Friday, August 28, 2009
Military Iconography and Louis XVI
Monsieur Buvat de Virginy gives a fresh interpretation. To quote:
For many, Louis XVI does not often conjure up much in the way of martial prowess or skill. However, like his famed Bourbon predecessors Henri IV and Louis XIV, he used military allegory and iconography to strengthen the image of kingship embodied by Versailles. As an absolute monarch from 1774 to 1789 and even during his rule as a constitutional sovereign from 1789 to 1792, representations and allegories of the king with military themes were used to reinforce manifestations of the royal power and control. In the wake of France’s victory in the American War of Independence, the image of royal authority under Louis XVI was strengthened not only through martial prowess on the battlefield and high seas, but also within the context of absolutist Bourbon imagery, royal commissions and architecture....
Unveiled in 1777, the white marble portrait of the king by Lucas combines military allegory with that of French prosperity and wealth. The king, dressed as a Roman emperor complete with a cuirass or classical breastplate, sword, and laurel crown, rests his right hand on a horn of abundance. Defaced during the Revolution, it is conjectured that the left hand originally brandished a scepter. In this example from the early part of the king’s reign, the use of such martial iconography is readily apparent and follows the tradition of Bourbon absolutist imagery while incorporating the commemoration of his perceived enlightened acts as king.
M Buvat de Virginy said...
Thank you for including this, I'm honored!
It's my first post and it became a little long-winded (I had to cut down a paper I had written on the same subject, which became difficult as I was flooded by all my research and couldn't decide what to include and what to cut!). I also had some trouble including images.
I'm not sure where to suggest this, but have you ever considered a post reviewing the 2001 Eric Rohmer film "l'Anglaise et le duc", or The Lady and the Duke as it was marketed in the States? It's a fascinating film, a little slow at times, but the political views it explores are revolutionary in terms of how the events of 1789 are presented. It's about Grace Elliot, a royalist Englishwoman living in Paris from 1790 to 1793, and her tempestuous relationship with a former lover, the duc d'Orleans. Very controversial in France!
I'd also love to see your comments on Louis XV. Although I can see where his frequent infidelities might get him a bad reputation, he was much more complex than many historians make him out to be. As early as the 1730s, when he was really able to take charge, he was already initiating reforms to tax previously privileged social groups.
elena maria vidal said...
Yes, I have seen that film and will do a post on it, at your request, Monsieur.
I have never done a post directly about Louis XV, although I have done posts about everyone in his family. I'll certainly think about writing one. I would love for you to do one, and then I could link to it!
|
Video of the Day: Live streaming Earth
NASA thought it would be kinda cool to put a bunch of HD video cameras up on the ISS and stream live video of the Earth down to anyone who wants it. Really, they're doing it because they want to see how some different kinds of cameras stand up to radiation in space, but you have to figure that there's definitely that aforementioned "kinda cool" factor, because it's very kinda cool. There is an array of four different commercial high definition cameras pointed in different directions, showing some spectacular views of the Earth, and they're all streaming right now, live.
Note that the streaming feed (embedded below) may not pop up immediately — if you see a gray image, it's because the stream is hopping between cameras or the station is (temporarily) out of range. If you see a black image, the stream is working, but the ISS is in the Earth's shadow, so it's dark. You can click here to see where on Earth the station is, so worst case, you'll be waiting maybe 30 minutes or so to see something cool.
NASA, via UStream
|
Kindergarten Intellectual Development
They want routine and rules.
Kindergarteners want to know the rules, and they usually follow them. If the rules change even slightly, they can be upset by the shift in their routine. Sometimes they have a strong desire to please adults and they complain when others break rules.
What you should do: Ensure your home has structure. Explain the rules and make sure your child understands them before expecting her to fall in line. Encourage simple routines like setting the dinner table and making the bed. When your child follows rules, tell her you appreciate it. When she doesn’t, calmly reinforce rules. Say, “You must clean your room,” for example.
|
WebMD Health - Sleep Disorders
Sep 27, 2011 8:44 PM
Chronic Nightmare Therapy May Make Sleep Peaceful
Yael Levy recalls having chronic nightmares as far back as elementary school, when she was living in Israel. The grandchild of Holocaust survivors, she says her dreams were filled with images of suffering and death.
In one recurrent nightmare, Levy was trapped in a concentration camp, facing death. In another, she was drowning in deep water. At their worst, the nightmares occurred on an almost weekly basis, leaving her jittery and desperately fatigued.
"I would wake up so terrified that I was afraid to go back to sleep," Levy says. "And the bad feelings were hard to shake. I would continue to feel frightened throughout the next day."
Chronic Nightmare or Bad Dream?
There's nothing unusual about having an occasional nightmare (which sleep experts define simply as a bad dream that causes the sleeper to wake up). But up to 8% of the adult population suffers from chronic nightmares, waking in terror at least once a week.
Sometimes the nightmares are so frequent and so upsetting that they make sound sleep all but impossible, setting the stage for fatigue and emotional problems like anxiety and depression.
Nightmares vary widely in their themes and specific content -- experts say they can be "about" anything -- but all cause fear, sadness, anger, shame, or another negative emotion. They occur during REM sleep, typically in the latter part of the night. Though more common in children and adolescents, they also strike in adulthood.
In many cases, chronic nightmares are triggered by psychological stress -- such as that stemming from posttraumatic stress disorder, a severe anxiety disorder that strikes people who have been exposed to or witnessed combat, violent assaults, accidents, natural disasters, and other terrifying ordeals.
Other causes of chronic nightmares include alcohol abuse, the use of certain medications, and sleep disorders, including the disordered breathing condition known as sleep apnea.
Plagued by Nightmares
Now 29 years old and living in New York City with her husband and 4-month-old son, Levy says she endured years of fractured sleep and persistent anxiety because of chronic nightmares. It never occurred to her that help was available.
"People have nightmares," Levy says. "I had mine, and that was that. I didn't think it was the sort of problem that could be treated."
It's a common misconception.
"Lots of people think that nightmares can't be treated," says Shelby Harris, PsyD, director of the behavioral sleep medicine program at Montefiore Medical Center's Sleep-Wake Disorders Center in New York City. "But there are effective treatments."
Help for Chronic Nightmares
One treatment option is psychodynamic psychotherapy, in which patients meet regularly with a therapist to discuss their nightmares and consider any emotional problems that might be causing them.
Another option is taking prazosin, a medication usually prescribed for high blood pressure; studies have shown that nightly doses of the drug are effective against chronic nightmares in people with posttraumatic stress disorder.
But Levy found relief not in pills or psychotherapy but from a simple behavioral technique she learned from Harris after seeking treatment not for nightmares but for insomnia.
Changing Nightmare Scripts
The technique that Levy used, known as imagery rehearsal therapy (IRT), grew out of research conducted in the 1990s. It's been steadily gaining favor as a treatment for chronic nightmares since 2001 when a landmark study published in the Journal of the American Medical Association found that it not only curbed nightmares among victims of sexual assault but also reduced PTSD symptoms.
"Studies show that 70% to 80% of people who try IRT get significant relief," says Barry Krakow, MD, director of the Maimonides International Nightmare Treatment Center in Albuquerque, N.M. He's one of the researchers who worked on the JAMA study and the author of four books on sleep medicine, including Sound Sleep, Sound Mind.
IRT is surprisingly easy to learn and to use. The basic technique can often be mastered in a few hours; once learned, it's used for only a few minutes a day for a matter of days or weeks.
Krakow says it's possible to try IRT on your own, but he warns that people who suffer from PTSD or another psychological condition should attempt the technique only with the help of a doctor or therapist.
Working with a professional also makes sense for people who have trouble visualizing dream images while awake. "Some people have difficulty painting a picture in the mind's eye," Harris says. "But with help, they get good at priming the pump for imagery."
3 Steps to Nightmare Control
As described by Krakow and Harris, IRT is a three-step process:
1. Jot down a brief description of a recent nightmare. If your most recent nightmare is too upsetting to think about, pick another.
2. Think of a way to change the nightmare. Krakow declines to tell his patients what sort of change to make, encouraging them to rely on their intuition to make an appropriate change.
3. Set aside a few minutes each day to imagine this altered version of the nightmare. Simply paint a mental picture of the altered version.
Some people with chronic nightmares, especially those who have suffered for years, find it hard to believe that a simple, essentially do-it-yourself technique could be effective.
Krakow says that when he explains IRT to his patients, "it's almost like they think the process is disrespecting them. They say, 'What do you mean I just write down a nightmare and change it and picture it in my mind? That's crazy.' It's almost like they think I'm saying, 'Change two dreams and call me in the morning.'"
Peaceful Nights
Levy can't recall exactly what she thought when Harris told her about IRT. But she tried it and found that it worked. Her nightmare about the concentration camp? She re-imagined herself in a summer camp where she could walk about freely. And the bad dream about drowning? The deep water that threatened to swallow her up became shallow enough to stand up in.
Levy still has nightmares, but they occur much less frequently -- about once every six weeks or so. When they do occur, they are less upsetting.
"Just learning that there was something I could do about my nightmares really helped a lot," Levy says. "Getting help changed things for me significantly. I'm more rested and happier, and I'm able to be more active during the day."
|
A doctor washes his hands just moments before his heart transplant surgery is set to begin at a hospital in the Bronx, New York City.
Photo by Q. Sakamaki/Redux
Can we make US medicine less dangerous?
Modern medicine works indisputable wonders when it’s delivered carefully and appropriately. But it’s easy to forget how much harm it can cause when something goes awry. Medical errors kill an estimated 440,000 U.S. patients every year—well over 1,000 every day—and harm many times that number. The toll puts medicine itself in the same league as cancer and heart disease as a leading cause of death. Yet until recently, no one was even measuring the devastation, let alone working to reduce it.
That’s now changing fast, thanks in part to the nonprofit Leapfrog Group. Leapfrog’s Hospital Safety Score—a user-friendly report card launched last year and updated every six months—enables anyone to inspect the safety records of 2,523 acute-care hospitals at a glance.
“Medical errors kill a population the size of Miami every year,” says Leah Binder, Leapfrog’s president and CEO. “By showing the public how hospitals are safeguarding their patients—or aren’t—we give the whole industry an incentive to do better.”
Hospitals already report many safety-related practices and outcomes to Medicare officials and the American Hospital Association. But neither the government nor the industry group converts those data into rankings that a layperson can grasp.
Leapfrog uses the existing public data, along with its own voluntary survey, to track each hospital’s performance on 28 safety measures. After plotting all the scores on a bell curve, the group’s analysts give each hospital a letter grade that reflects its standing in relation to all the others.
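The curve-grading idea described above can be sketched in a few lines of code. Leapfrog's actual methodology and cutoffs are not reproduced here; this is a purely hypothetical illustration of grading each hospital by its standing (z-score) on the overall distribution of safety scores, with invented thresholds:

```python
import statistics

def letter_grades(scores):
    """Grade each score by its standing relative to the whole group.

    The z-score cutoffs below are invented for illustration only;
    they are not Leapfrog's actual (more involved) methodology.
    """
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    grades = []
    for s in scores:
        z = (s - mean) / stdev  # standing on the bell curve
        if z >= 1.0:
            grades.append("A")
        elif z >= 0.0:
            grades.append("B")
        elif z >= -1.0:
            grades.append("C")
        elif z >= -2.0:
            grades.append("D")
        else:
            grades.append("F")
    return grades

# Hypothetical composite safety scores for five hospitals
print(letter_grades([3.2, 2.9, 2.5, 2.1, 1.4]))  # → ['A', 'B', 'B', 'C', 'D']
```

The point of grading on the curve, rather than against fixed benchmarks, is that each hospital's grade reflects its position relative to all the others, which is what makes the letter meaningful to a layperson at a glance.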
The latest grades, released Thursday, reveal some encouraging trends. Except for Wyoming and Washington, D.C., every state in the country has seen a slight increase in its hospitals’ safety scores. Nationally, the average score has jumped by 6% since 2012, and a third of all hospitals have raised their standing by at least 10%. Many have improved staffing and training, while adopting proven strategies to prevent infections, injuries and medication errors.
“More hospitals are working harder to create a safe environment,” says Binder, “and that’s good news for patients.”
But the gains look awfully modest when you consider the remaining challenges. Nationally, 173 hospitals got grades of D or F, meaning they were 1.5 to 2.7 times more dangerous than those with A’s and B’s. In four states (Alaska, Idaho, Nebraska and Wyoming) and the District of Columbia, not a single hospital earned an A grade. And some of the country’s biggest-name institutions—from the Cleveland Clinic to UCLA’s Ronald Reagan Medical Center—received C’s for patient safety. Some of them scored respectably on most measures but failed spectacularly to address preventable hazards such as falls, trauma and postoperative blood clots.
How do such mundane problems persist in such high-tech environments? Writing in the Journal of Patient Safety last fall, toxicologist John T. James offers a sobering list of contributors. “Our country is distinguished for its patchwork of medical care subsystems that can require patients to bounce around in a complex maze of providers as they seek effective and affordable care,” he writes. “Because of increased production demands, providers may be expected to give care in suboptimal working conditions, with decreased staff, and a shortage of physicians, which leads to fatigue and burnout. … The picture is further complicated by a lack of transparency and limited accountability for errors that harm patients.”
The Leapfrog listings are a bold move toward transparency, and experience suggests that transparency alone can shame business interests into better behavior. Food poisoning fell sharply in Los Angeles during the late 1990s, after sanitary inspectors started posting letter grades in restaurant windows. New York City saw a similar decline after it adopted the practice in 2010. "Restaurant letter grades were our inspiration," says Binder. "I wish hospitals had to paste safety grades in their windows."
Since they don’t, the Leapfrog group has developed iPhone and Android apps that anyone can use to check hospital safety grades by name or location. “This spring we saw eight million people sign up for health insurance via the Affordable Care Act,” Binder says. “As they launch a search for health care providers, we’re urging them to put safety first and look for an ‘A’ hospital in their area.”
If that happens, hospitals may yet achieve the kind of progress that restaurants have made in New York and Los Angeles. But no one expects transparency alone to stop the epidemic of hospital hazards. Under the Affordable Care Act's Value-Based Purchasing provisions, Medicare now rewards hospitals for meeting various quality measures, and penalizes those that fall short. The incentives have changed only modestly so far, but the Centers for Medicare & Medicaid Services will expand them in coming years to address more safety issues. As a Georgia hospital executive told Kaiser Health News last fall, "The thing about the government, if they start paying attention to it, we have to scramble around to pay attention to it. It gets us moving."
American medicine is changing, but it can still be hazardous to your health. So don’t take those gauzy, feel-good hospital ads too seriously. As Binder wrote in Forbes on Tuesday, “transparency is not public relations, but cold, hard data supplied through reliable sources, scrubbed, vetted, and checked for validity.” That’s why the new Leapfrog grades are so valuable.
|
The Legacy of Muslim Societies in Global Modernity
Question 1 : In what ways has the paradigm of golden age and decline dominated the historiography of Muslim regions in the period from 1300-1900 CE, and how has this paradigm been detrimental?
Human beings respond readily to stories. A narrative with a beginning, middle, and end seems to be a natural way for humans to structure their understanding of the world around them. There are of course many ways in which stories are instructive for purposes of understanding the world. But we must always remember that stories are radical simplifications of reality. The suggestion that a particular society had a difficult (or miraculous) birth followed by a period of power (or prosperity) but later experienced decline (and maybe even collapse) might capture some important dimensions of its existence but totally obscure others that are equally important.
One widely held story holds that the Muslim world enjoyed a golden age at the time of the Abbasid dynasty and entered into a long era of decline after the Turks and Mongols established a series of transregional empires during the period about 1000 to 1300. Some have viewed the entire era from 1300 to 1900 as an age of Muslim decline. That must be a world record for a process of decline. How many societies have been able to decline for six centuries straight?
There are many problems with this story. One is that it measures Muslim “decline” against the yardstick of European “progress.” There is no question that European peoples did remarkable things during the era 1300 to 1900. They built powerful national states and established global maritime empires. They also constructed modern science and carried out an amazing process of industrialization. But there is no reason why Muslim societies should necessarily have followed the same path, even if they could have done so. Since they did not have access to the natural resources of the New World, nor did they enjoy the windfall of energy resources in the form of coal that fueled the process of industrialization in Europe, it would have been very difficult indeed for Muslim societies to duplicate European experience.
Another problem with the story is that it totally overlooks impressive achievements of Muslim societies themselves. One salient example has to do with the remarkable expansion of Ottoman power in the Indian Ocean basin during the sixteenth century. The fascinating new book by Giancarlo Casale, The Ottoman Age of Exploration, brings into view a round of maritime exploration and imperial expansion that paralleled European efforts in the New World.
I have taught Islamic civilization to California undergraduate History majors for 40 years. I have used Marshall Hodgson's three-volume The Venture of Islam, lately supplemented with Ira Lapidus's History of Islamic Societies. Like Hodgson, I am also a world historian. Once I encountered Hodgson's opus, I realized instinctively the correctness of his approach: any history worthy of the name must give equal weight to all periods of Islamic history, and must also consistently seek to locate it in the larger Eurasian contexts of which it was a part. Hodgson's 100-page methodological introduction to Vol. 1 of The Venture remains essential reading.
I would add that we need to be aware of the connections between the Golden Age paradigm and the "Rise of the West" paradigm, according to which the course of Western history can be seen as a continually upward-sloping line linking the Greeks, the Renaissance, and modern times. For Hodgson, this line is an optical illusion. The way forward, he suggests, lies in inserting both the history of the West and that of the lands of Islam in their world-historical contexts.
As an historian of the early modern Ottoman Empire, the paradigm of golden age and decline has a doubly complex and contradictory effect on the conceptualization of my field of research. On the one hand, within the grand narrative of “Islamic civilization,” the period 1300-1900 has traditionally held the place of “the dark ages,” the antithesis of the golden age during which political fragmentation, intellectual stagnation, and eventually foreign occupation were the defining elements of Muslim historical experience. Since these centuries are virtually coterminous with the history of the Ottoman state, this has had the effect of equating the entire trajectory of Ottoman history with decline, and – at least until very recently—has relegated the field of Ottoman history to a marginal position within the larger field of Islamic studies.
On the other hand, within the more restricted confines of Ottoman history we are confronted with another version of the same problem: The sixteenth century—which is the subject of my own research—has long held the status of an Ottoman ‘golden age,’ while subsequent centuries have been defined as a period of inexorable decline. Recently, Ottomanists have devoted a great deal of energy to the project of deconstructing this periodization. And yet, it remains true that the scholarly literature on the sixteenth century is comparatively quite developed, while many subfields of Ottoman history relating to the seventeenth, and particularly the eighteenth centuries are still in their infancy. This imbalance makes it extremely difficult to construct a compelling narrative of Ottoman history as a whole that can replace the story of “golden age” and “decline” that we are so eager to transcend.
Of course, all of this also needs to be understood within an even larger framework: the grand narrative of the "Rise of the West," which continues to define the ways in which we make sense of history as a discipline, as well as the manner in which we conceptualize, organize and combine all of its constituent parts. According to this narrative paradigm, the historical experience of Europe during the nineteenth and twentieth centuries is equated with 'modernity,' and the establishment of European political, economic, and cultural hegemony over the rest of the world at this time is understood as the 'end game' of history. Within this framework, societies defined as 'non-Western' can only have historical relevance to the extent that they are able either to contribute to, mimic, or resist the relentless rise of the West—and those periods in which they are able to accomplish one of those three things are typically identified as "golden ages" (to be followed inevitably by decline and, eventually, historical oblivion).
This is a paradigm which is principally associated with the historiography of the Ottoman Empire. For a long period it seemed to be a preoccupation of Ottoman historians. Scholars who specialize in the study of other empires, Muslim or non-Muslim, have always been interested in such questions, but not to the extent that it concerns or has concerned Ottoman specialists. Safavid historians study an empire that was never as powerful as the Ottomans or as wealthy as the Mughals. In the Iranian case scholars have been more preoccupied with the survival of a fragile, impoverished state and perhaps even more so with the themes of Iranian identity and the rise of Shi‘i Islam, which becomes associated with Iranian identity. Mughal or Timurid-Mughal historians have also been preoccupied with other issues such as Hindu-Muslim relations and the colonial occupation of the subcontinent in the waning days of the Empire.
If one wishes to discuss this paradigm, it is important to emphasize that golden age and decline can both be discussed from different perspectives: the attitude of rulers, the perception of an empire's intellectuals, bureaucrats, or religious scholars, or the later interpretation of twenty-first-century historians. If one wants to return to the traditional question of golden ages and decline, then at least it is important to be precise about the criteria of the debate and the identity and perspective of those who discuss it. The idea of golden ages has different meanings for different individuals or classes.
One of the striking omissions in the discussion of imperial rise and decline has been the failure to engage the single indigenous Middle Eastern/Islamic model of the rise and fall of states, albeit tribal ones. This is Ibn Khaldun's famous dialectical theory, advanced in the Muqaddimah, which, like most models, does not exactly fit the case of any of the so-called early modern Muslim empires, but still raises fundamental questions about the political, social, and psychological changes that occur in any state over the course of its existence. Some Ottoman historians worried about the implications of Ibn Khaldun's cyclical model for the Ottoman Empire, but most modern historians have ignored his social, political, and psychological insights about the cycles of dynasties. If we are to focus on the question of the rise and fall of empires, why not begin with Ibn Khaldun's model?
The paradigm has created its own method of periodization, with 1800 as a dividing line: a before and an after, marked by contact with the West. The centuries prior to 1800 were associated with decline, while the period starting in 1800 was associated with an awakening due to contact with the West. The 'before' has sometimes been studied in ahistorical ways. This approach has also usually neglected the economic aspects of history, concentrating heavily on cultural and religious aspects. It has also meant that all the sources for modernity were located in the West. As a result, the earlier centuries were understudied for a long time.
The paradigm separates the period 1300-1900 from the times before and after it; it defines the Islamic world in terms of Middle Eastern empires rather than its larger and expanding frontiers; it contrasts a declining Islamic world to a rising European world; it focuses history on imperial conflict and assumes an underlying religious hostility as the motive force for this history.
The ‘decline paradigm’ has long been associated with discussions of the Ottoman system, but also with the larger Muslim project in the aftermath of 1258. From at least the work of the well-known Persianist E. G. Browne (d. 1926) in the early part of the last century, ‘decline’ has also dominated discussions of the Safavid period in Iran, a period conventionally given the dates 1501 to 1722. In the Safavid case, ‘decline’ has been most often deployed with respect to the trajectory of the 17th century. Most Western-language commentators on 17th-century Safavid Iran have viewed the period as having begun with a burst of cultural and intellectual achievement, in an atmosphere of military, political, and economic stability, due largely to the policies undertaken by Shah `Abbas I (r. 1588-1629), only to end in the darkness of fanatical religious orthodoxy amid military, political, and economic chaos. Most commentators cite the changing behavior and interests of important Twelver Shi`i `ulama over the 17th century as a key factor in Safavid ‘decline’: the `ulama of the early 17th century have been characterized as interested primarily in philosophy and mysticism, and as averse to, or refraining from, entanglements in secular affairs. Western-language scholars have portrayed the majority of the late 17th-century Iranian `ulama as intolerant, orthodox clerics who crushed the philosophical renaissance of the earlier half of the century and whose growing political influence inhibited an adequate response by the Safavid court to the political and military crises enveloping it, with the result that, in 1722, the Afghans sacked the Safavid capital of Esfahan.
A key body of material cited in support of aspects of ‘Safavid decline’ comprises, first, Persian-language sources, especially including chronicles, many completed many years after the 1722 fall of Esfahan to the Afghans, the event conventionally heralded as marking the dynasty’s end, and, secondly, the accounts of foreign travelers to and residents in Safavid Iran. The ‘agenda’ of the authors of these sources is all too seldom subjected to critical analysis.
As a result of recent activity, scholars and lay persons interested in Safavid Iran today have at their disposal a much vaster array of primary and secondary sources, composed in a myriad of languages, than was available prior to Iran’s 1979 Revolution. A myriad of sub-fields may now also be said to exist within ‘Safavid Studies’. But scholars in these ‘new’ sub-disciplines continue to take this model of the decline and fall of the Safavid ‘state’ as given, and to privilege identification of signs of ‘decay’ in the ‘life’ of their sub-discipline over signs of ‘vitality’.
As Bentley observes in his response, a narrative with a beginning, middle, and end is very attractive for presenting complicated developments to a large audience. Obviously, this paradigm’s domination of the historiography of the Islamic World in this period is also related to Orientalism, as Said showed more than thirty years ago. It is not difficult to see that making decline the focus of analysis has caused many significant developments in the Islamic World of this period to be overlooked.
However, among professional historians of Islam, the decline paradigm has been challenged for quite a while now – since 1975, if we take Roger Owen’s piece in the Review of Middle East Studies as a starting point. Yet an alternative narrative as attractive to non-specialists as decline has been does not seem to have emerged – otherwise this forum would not be necessary. So perhaps focusing on the critique of the decline paradigm is not the best way to appeal to our colleagues outside our field, or to the public at large. We need to come up with an alternative narrative.
Another important question to consider is what people in the Islamic World think about this paradigm of golden age and decline. What struck me within the first few years of my graduate school experience in the US was the disjunction between the strong critique of Ottoman decline among the Middle East specialists of the US academe (Cemal Kafadar’s article on the subject was an inspiring exception to me) and the continuing relevance of the concept in Turkey. Obviously, there is a whole set of reasons for this disjunction, such as the internalization of Orientalism by the modernizing elite of Turkey. Yet it is also difficult to argue that the Ottoman Empire did not decline in its global significance or that it did not become relatively poorer: just take a twenty-minute walk in Istanbul from the Topkapi Palace, where William Harborne, the ambassador of Queen Elizabeth, sought the alliance of the Ottomans against the Spanish in the 1580s, to the Istanbul High School, whose building once housed the Ottoman Public Debt Administration, run in the 1880s by Europeans who collected taxes in the Ottoman Empire in order to transfer funds to the empire’s creditors. So perhaps centering our scholarship on the critique of the decline paradigm does not resonate well in the Islamic World, where people live in the midst of physical markers of a relative decline.
The “Long Decline Paradigm” (“LDP” for purposes of this discussion) is one that has shaped not only the study of Muslim societies but also the study of all non-European societies in this era. The LDP is part of the Europe-centered vision of world history that interprets world history primarily as it relates to Western European history. Since the basic narrative of Europe-centered world historiography is the rise to global dominance of Western Europe in the 19th century, the narrative for other major global societies gets dominated by asking the question: “What went wrong?”—which is basically asking why Chinese, South Asian, and Muslim societies did not have the same history as Western Europe. “Success” in the LDP is defined by viewing a non-European society in terms of how close its experiences were to those of Western Europe: did it have a “Renaissance,” did it have a religious “Reformation” that went beyond medieval theological-institutional formulations, did it have an “Enlightenment,” or an “Industrial Revolution”?
In recent years, the concept of “multiple modernities” has been developed by scholars like S. N. Eisenstadt. This conceptualization recognizes the broad range of ways that socio-cultural identities can be both distinctive and “modern.” Societies in the era from 1300-1900 reflect a similar duality of sharing a common experience of major changes as a result of increasingly intense networks of hemispheric and global interactions and, at the same time, developing distinctive responses to those new conditions. The major city-based societies of the era were strong and dynamic, not declining and failing. For example, China under the Qing dynasty in the 17th/18th centuries reached its largest territorial expansion in Chinese history.
In the Muslim world, the era from 1300-1500 was a time of major expansion of the number of believers and of important Muslim political systems. The result was that by the 16th century, the world of Islam was virtually twice the size that it had been in the era of the “Golden Age” of the Caliphs (7th-10th centuries CE). Political power was expressed in dynamically expansive states ranging from the Songhay state in West Africa through the great imperial sultanates of the Ottomans, Mughals, and Safavids and the entrepreneurial amirs and sultans of South East Asia.
The LDP ignores these developments and gives a misleading sense of centrality to Western European experience. This means that not only is the history of Muslim societies distorted but also the history of Western Europe is clearly misunderstood. It means, to note a very specific example, that the history of the Industrial Revolution ignores very important elements: if, as many scholars think, the development of the cotton industry in Lancashire is important in the Industrial Revolution, it distorts our understanding of that development to ignore the role of imported cotton cloth from India (which dominated the global market in cotton cloth at the beginning of the 18th century) as an incentive to create local British cotton cloth production.
The LDP, in other words, provides an extremely misleading narrative for understanding world history in 1300-1900, for understanding the history of Muslim regions in that era, and even for understanding the history of Western Europe.
One of the most important effects of the decline paradigm has been its power in shaping our approach to the history of “Muslim regions.” It is essentialist, binary, and normative. It urges us to divide history into qualitatively and morally preferential periods, to sort out what it deems substantial from the ephemeral, the authentic from the borrowed, the genuine from the fake, and the correct from the wrong. It reduces all aspects of history to “religion” as the core of “Muslim regions” and favors Arabic, the Middle East, and the Umayyad-Abbasid caliphates as the essence of “Islamic history,” beyond and after which one can only observe decline and syncretism until European modernity comes to rescue this long-slumbering civilization from the ruins of the middle ages. In the end, even when it praises the “golden age,” the decline paradigm casts “Islam” and “Muslim” as exotic, incomplete, and irreparably flawed. While “Islam” is depicted as the other, the alter ego which the “West” is not and should not be, the “West” emerges as the model against which we judge the others.
Copyright 2010, the Ali Vural Ak Center for Global Islamic Studies
Any views, findings, conclusions, or recommendations expressed on this website do not necessarily reflect those of the National Endowment for the Humanities.
|
Particularly clear skies for the Gulf Stream
Jan 1, 2003
Particularly clear skies for October allowed analysts to track an unusual feature of the Gulf Stream. Presumably unnoticed by mariners on their way to the Caribbean or to points along the U.S. East Coast was a large warm eddy located south (yes, south) of the Gulf Stream.
Typically, warm-core eddies are found north of the Gulf Stream while cold-core eddies are found south of the stream. Most Bermuda sailors are familiar with the ways of warm eddies spinning clockwise and cold eddies spinning counterclockwise.
This past fall, however, Gulf Stream charts such as the one shown with this column by Jenifer Clark have exhibited a conspicuous ring of warm water in the vicinity of 36° N, 64° W, well south of the main body of the stream. This anomalous, clockwise warm eddy was moving slowly to the west and southwest throughout October and November.
"We are aware of the existence of maybe half a dozen of these each year," said Clark, who provides Gulf Stream data and analysis as a business service. "But during periods of particularly clear weather we may be seeing that these anomalous eddies are out there all the time," she added.
"We're not really sure how these eddies originate because they don't seem to break off from meanders of the stream in the normal way," said Clark. "I think there are dynamics involved which we don't really understand yet."
Gulf Stream observations are made using infrared satellite imagery. When there is cloud cover it blocks the satellite's view of the ocean surface.
Oceanographers have explained that anomalous eddies typically exist in conjunction with a normally rotating eddy, often with water between them entrained into motion. In this case, the accompanying eddy might be a cold eddy located just to the southeast at 36° N, 62° W. A pair of such eddies is typically known as a double vortex. The phenomenon of double-vortex eddies has been observed in many oceanic locations and can be simulated in any body of water, including a bathtub.
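The self-propelling behaviour of such a pair can be illustrated with a toy point-vortex model. This is a sketch only, with arbitrary units and made-up positions and strengths, not an oceanographic simulation: a clockwise vortex (the warm eddy) paired with a counterclockwise one (the cold eddy) induce motion in each other, and the dipole translates as a unit, perpendicular to the line joining the two centers.

```python
import math

def step(vortices, dt):
    """Advance point vortices one Euler step; each vortex is [x, y, gamma]."""
    vel = []
    for i, (xi, yi, _) in enumerate(vortices):
        u = v = 0.0
        for j, (xj, yj, gj) in enumerate(vortices):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            # Positive gamma = counterclockwise circulation about (xj, yj)
            u += -gj * dy / (2 * math.pi * r2)
            v += gj * dx / (2 * math.pi * r2)
        vel.append((u, v))
    for vortex, (u, v) in zip(vortices, vel):
        vortex[0] += u * dt
        vortex[1] += v * dt

# A clockwise "warm" vortex (gamma < 0) paired with a counterclockwise
# "cold" vortex of equal strength two units away: the classic dipole.
warm = [0.0, 0.0, -1.0]
cold = [2.0, 0.0, +1.0]
pair = [warm, cold]
for _ in range(1000):
    step(pair, 0.01)
print(warm[:2], cold[:2])
```

Both vortices drift together in the −y direction at the textbook dipole speed Γ/(2πd), while their separation stays fixed, which is why a double vortex can wander away from the current that spawned it.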
"This particular warm eddy may last for as long as six months," said Clark. "I don't think it is going to be entrained by the main body of the Gulf Stream, and since it is warm and south of the Gulf Stream it's not likely to sink beneath the surface the way many cold eddies do. It looks pretty stable, and it's been there for a while."
Those interested in more information can contact Jenifer Clark's Gulf Stream at 301-952-0930.
|
This article is on the group of early Muslims. For the article on the contemporary Islamic movement, see Salafi
Salaf or as-Salaf aṣ-Ṣāliḥ (السلف الصالح) can be variously translated as "(righteous) predecessors" or "(righteous) ancestors." In Islamic terminology, it is generally used to refer to the first three generations of Muslims:
Ṣaḥāba: (Arabic: الصحابه, "The Companions") The companions of Muhammad, who had met or had seen him while in a state of īmān, and then died on that state
Tābi‘īn: (Arabic: التابعين, "The Successors") Those who had met or had seen the ṣaḥāba while in a state of īmān (belief), and then died on that state.
Tāba‘ at-Tābi‘īn: (Arabic: تابع التابعين, "The Successors of the Successors") Those who had met or had seen the tābi`īn while in a state of īmān, and then died on that state.
In a ḥadīth (prophetic tradition), Muhammad says of the salaf, "The best people are those living in my generation, then those coming after them, and then those coming after (the second generation)." (Ṣaḥīḥ Bukhārī )
Following is an incomplete alphabetical list of the Salaf.
Usage of the Word in the Qur'an
The word salaf is mentioned in the Qur'an in the following verses:
"When they angered Us, We punished them and drowned them every one. And We made them a thing past (salafan), and an example for those after." (43:55-56)
"Tell those who disbelieve that if they cease (from persecution of believers) that which is past (salafa) will be forgiven them; but if they return (thereto) then the example of the men of old hath already gone (before them, for a warning). And fight them until persecution is no more, and religion is all for God. But if they cease, then lo! God is Seer of what they do." (8:38-39)
First generation
Second generation
Third generation
See also
|
The Charles Sowers Windswept Installation Moves with the Wind
By: Gil Haddi • References: charlessowers & dezeen
Interacting with its environment, the Charles Sowers Windswept installation demonstrates the movement of the wind.
As something we often ignore because we can't see it, the wind ebbs and flows through this installation piece, making it appear as though it is contorting on its own. The project uses 612 aluminum weather vanes that rotate and point in the direction the wind is blowing. It gives us a visual of how dynamic the air flow really is, constantly changing in multiple directions, causing the weather vanes to have a rippling effect at some moments and a swirling effect at others.
The movement of the Charles Sowers Windswept installation demonstrates how the wind interacts with the surface of a building, a phenomenon we wouldn't normally notice otherwise.
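The rippling effect described above emerges whenever many independent pointers chase a wind field that varies across space. The toy model below is purely illustrative (the wind function, gain, and time step are invented, not Sowers' mechanism): each of 612 vanes relaxes toward its local wind direction, and because that direction is a slow traveling wave, a ripple runs down the row.

```python
import math

N = 612                      # number of vanes, as in the installation
vanes = [0.0] * N            # current pointing angle of each vane (radians)

def wind_direction(i, t):
    """Hypothetical gusty wind: a slow wave of direction traveling along the wall."""
    return 0.5 * math.sin(0.05 * i - 1.5 * t)

def step(t, dt=0.05, k=4.0):
    """Each vane rotates toward the local wind direction (damped relaxation)."""
    for i in range(N):
        err = wind_direction(i, t) - vanes[i]
        vanes[i] += k * err * dt   # proportional response -> vanes lag, producing ripples

for n in range(400):
    step(n * 0.05)
```

Because each vane responds with a slight lag, neighbouring vanes disagree momentarily, which is exactly the swirling, rippling texture the installation makes visible.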
|
Why do we vaccinate?
Is it true that unvaccinated children never get autism?
Can one get sick even if they are fully vaccinated?
Why should I vaccinate for polio if the disease hasn’t infected anyone in the US for decades?
If contracting the disease naturally creates immunity, why should I vaccinate my child?
I hear some vaccines contain mercury called thimerosal. Isn’t mercury toxic? Doesn’t that mean vaccines will be toxic to my child?
Some people say that vaccines caused autism in their children. Is that true?
I hear that some vaccines have aborted fetus cells in them. Is that true?
What are the most common side effects of vaccines?
What are some of the less common side effects of vaccines?
3 Responses to “F.A.Q.”
1. http://factsnotfatasy.com/vaccines.html has the answers to these questions if you care to borrow them. :)
2. A causal statement is a strong statement to be proven. However, a strong linkage exists between MMR, MMR components, and Autism. While some general studies may not show linkage because they are taken across a wide population, when the studies focus on the at-risk populations, linkages and possible causality exist. Given the intense financial pressure to subdue such information, readers should read with caution any study that summarily dismisses a lack of causality but also must not jump to conclusions not supported by the facts. For more facts see http://english2016.wordpress.com/mmr-and-autism http://www.cdc.gov/vaccinesafety/vsd/mmrv.htm
http://www.cdc.gov/vaccines/pubs/vis/downloads/vis-mmr.pdf paying particular attention to the warning that some people “should NOT get the MMR” (those high risk groups including those with autoimmune disorders.) Note: Baby eczema can be an indicator of autoimmune allergic reactions. http://childhealthsafety.wordpress.com/2009/07/05/wakefieldreplicated/ which does a more thorough analysis than the cursory search of literature mentioned in your post.
• Dear English desk, it is true that some people should not get the vaccine if they are already sick with certain autoimmune disorders, but I think you’re getting confused. They’re referring to people that are already sick, how do you go from that to “a strong linkage exists between MMR, MMR components, and Autism”?
Leave a Reply
|
Mind-controlled prosthetics offer more mobility
There are two million people with amputations in the United States. For many of these patients, prosthetic devices offer greater mobility. Now, researchers are testing the next generation of these devices.
They are nothing like you have seen before.
Not much slows down Zac Vawter, not even an amputated leg.
"I lost my leg in a motorcycle accident,” Zac says.
Zac received a prosthetic leg. It helps him get around, but it has its limitations.
He says, "If I walked up and left the knee locked, and sat down, it stays locked."
However, a thought-controlled myo-electric leg does what Zac's prosthetic cannot. Before Zac could use it, orthopedic surgeon Doug Smith took nerves from his lower leg and redirected them to his hamstring muscle.
"Instead of firing when you think about bending your knee, it would fire when you think about raising your ankle,” Smith says.
When Zac wants to move the leg, the brain signal travels down his spinal cord, through the nerves. Then, electrodes in the prosthetic pick up the signals from the muscles.
"You can have a prosthetic device that actually works according to your thoughts,” Smith adds.
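The signal chain Smith describes (thought → rerouted nerve → muscle contraction → electrode → joint command) is, at its core, a pattern-recognition problem. The sketch below is illustrative only: the channel counts, calibration values, and feature choice are invented, and the study's actual algorithm is certainly more sophisticated. It shows the general idea of matching live muscle-signal features against patterns recorded while the user thinks about each movement.

```python
def mav(window):
    """Mean absolute value, a standard EMG amplitude feature."""
    return sum(abs(s) for s in window) / len(window)

# Hypothetical calibration: one feature pattern per intended movement,
# recorded over two electrode channels while the user imagines each motion.
PATTERNS = {
    "flex_knee":   [0.9, 0.1],
    "raise_ankle": [0.1, 0.8],
    "rest":        [0.05, 0.05],
}

def classify(channels):
    """Pick the calibrated movement whose pattern is nearest the live features."""
    feats = [mav(ch) for ch in channels]
    def dist(pattern):
        return sum((f - p) ** 2 for f, p in zip(feats, pattern))
    return min(PATTERNS, key=lambda m: dist(PATTERNS[m]))

# Simulated samples from two electrode sites; only channel 1 is strongly active,
# matching the imagined "raise ankle" pattern.
live = [[0.05, -0.1, 0.08, -0.06], [0.7, -0.9, 0.85, -0.75]]
print(classify(live))   # -> raise_ankle
```

In a real device this decision loop runs continuously, many times per second, turning each classified intent into motor commands for the knee and ankle.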
The device is still being studied, so Zac cannot take it home. But he looks forward to the day he can.
Zac says, "Stairs with that leg, the bionic leg, is really phenomenal."
Until now, only thought-controlled arms were available.
Although the cost of the bionic leg has not been determined, researchers say a version could be available for consumer use within three-to-five years.
REPORT: MB #3720
BACKGROUND: Prosthesis is an apparatus that is used to restore the function of a limb with a prosthetic replacement. A prosthetic replacement can reinstate the use of the legs, arms, joints, eyes, and hands. When a person needs a prosthetic limb usually it is because of a tragic accident or disease. When a person has to have an amputation, they are losing the function of that body part. Thankfully, doctors have created prosthetic body parts to replace the function of their biological body part. (Source: http://www.nlm.nih.gov/medlineplus/ency/article/002286.htm)
BENEFITS: Those who have undergone the process of having a prosthetic limb have been known to have more energy. Those who decided to continue to use crutches and a wheelchair did not have the same amount of energy that prosthetic limb recipients had. Patients who have prosthetic limbs also have more mobility. Those who have prosthetic legs have the ability to go up and down the stairs and reach places that are not wheelchair accessible. Prosthetic limbs also offer a sense of independence that others do not have. (Source: http://www.livestrong.com/article/36509-advantages-prosthetic-legs/)
MODERN PROSTHETIC LIMBS: Prosthetic limbs have advanced throughout the years. They have grown to be lighter, more realistic, and stronger than older prosthetics. They are also easier to grip and walk in, and they provide more comfort. Although prosthetic limbs have evolved from earlier days, each prosthetic is tailored and specific to the person receiving it. (Source: http://science.howstuffworks.com/prosthetic-limb2.htm)
"BIONIC LEG": The latest technology for prosthetics is the "bionic leg." Scientists have designed a bionic prosthetic leg that can reproduce a full range of ambulatory movements by communicating with the brain of the person wearing it. The prospects for such connections between a patient's prosthetic and their peripheral nerves are generally dim. In most amputations, the nerves in the thigh are left to die. A neurosurgeon at UW Medicine, Dr. Todd Kuiken, pioneered a practice called "reinnervation" of nerves severed by amputation. Dr. Doug Smith was trained to conduct the operations. He rewired the severed nerves to control some of the muscles in Zac's thigh that would be used less frequently in the absence of his lower leg. Within just a few months of the amputation, those nerves had recovered from the shock of the injury and began to regenerate and carry electrical impulses. When Zac thought about flexing his right foot in a specific way, the rerouted nerve endings would consistently cause a distinctive contraction in his hamstring. When compared with prosthetics that were not able to "read" the intent of their wearers, the robotic leg programmed to follow Zac's commands reduced the kinds of errors that cause unnatural movements, discomfort and falls by as much as 44 percent, according to The New England Journal of Medicine. (Source: http://www.orthop.washington.edu/?q=bionic-leg-is-controlled-by-brain-power-article-in-la-times-featuring-dr-douglas-smith.html)
Susan Gregg
Media Relations & Public Relations
UW Medicine Strategic Marketing & Communications
(206) 616-6730
Gray Television, Inc. - Copyright © 2002-2014 - Designed by Gray Digital Media - Powered by Clickability
|
When you restart a computer, do you lose all your files too?
When you restart a computer normally, whether after installing or removing hardware or programs, or after turning it on for the day, you should not lose anything, as long as you saved any open documents before shutting down.
More Info:
A computer file is a resource for storing information, which is available to a computer program and is usually based on some kind of durable storage. A file is durable in the sense that it remains available for programs to use after the current program has finished. Computer files can be considered as the modern counterpart of paper documents which traditionally are kept in offices' and libraries' files, and this is the source of the term.
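That durability is easy to demonstrate: bytes written and closed in one "session" can be reopened later, independent of the program that created them. A minimal Python sketch (the file name is arbitrary):

```python
import os
import tempfile

# A file persists independently of the program that wrote it.
path = os.path.join(tempfile.gettempdir(), "durability_demo.txt")

with open(path, "w") as f:      # first program run: write and close
    f.write("saved before closing")

with open(path) as f:           # a later run: the data survived on disk
    recovered = f.read()

print(recovered)                # -> saved before closing
os.remove(path)                 # tidy up
```

The key point for the question above is the first step: only data that has been written out (saved) before the program closes becomes part of this durable storage; unsaved work exists only in volatile memory and is lost on restart.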
The word "file" was used publicly in the context of computer storage as early as February, 1950. In an RCA (Radio Corporation of America) advertisement in Popular Science Magazine describing a new "memory" vacuum tube it had developed, RCA stated:
|
One of the reasons it’s so difficult to make a quantum computer, and one of the reasons I'm a little skeptical at the moment, is that - the reason the quantum world seems so strange to us is that we don’t behave quantum mechanically. I don’t – you know, you can - not me, but you could run towards the wall behind us from now 'til the end of the universe and bang your head in to it and you’d just get a tremendous headache. But if you're an electron, there's a probability if I throw it towards the wall that it will disappear and appear on the other side due to something called quantum tunneling, okay.
And that’s the problem with a quantum computer. You want to make this macroscopic object, and you want to keep it behaving quantum mechanically, which means isolating it very carefully from all the interactions, both within itself and with the outside world. And that’s the hard part: isolating things enough to maintain what's called quantum coherence. That’s the challenge, and it’s a huge challenge.
But the potential is unbelievably great. Once you can engineer materials on a scale where quantum mechanical properties are important, a whole new world of phenomena opens up to you. And as we say, if we created a quantum computer - and I must admit I'm skeptical that we'll be able to do that in the near term - but if we could, we'd be able to do, in a finite time, computations that would currently take longer than the age of the universe. We'd be able to do strange and wonderful things. And of course, if you ask me what's the next big breakthrough, I'll tell you what I always tell people, which is if I knew, I'd be doing it right now.
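Krauss's contrast between an electron and a head can be put in rough numbers with the standard WKB estimate for tunneling through a barrier, T ≈ exp(−2κL) with κ = √(2m(V−E))/ħ. This is a back-of-envelope sketch; the barrier height, width, and energies below are arbitrary illustrative values, not anything from the interview.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_E  = 9.1093837015e-31     # electron mass, kg
EV   = 1.602176634e-19      # joules per electronvolt

def tunneling_probability(mass, E_eV, V_eV, width_m):
    """WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier (E < V)."""
    kappa = math.sqrt(2 * mass * (V_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 0.5 eV hitting a 1 eV barrier a nanometer wide:
# small but decidedly nonzero odds of appearing on the other side.
print(tunneling_probability(M_E, 0.5, 1.0, 1e-9))   # roughly 7e-4

# The same barrier for a 70 kg person: kappa scales with the square root of
# mass, so the exponent is astronomically large and the probability
# underflows to exactly zero in floating point.
print(tunneling_probability(70.0, 0.5, 1.0, 1e-9))
```

The exponential dependence on √m is the whole story: quantum behaviour is routine for electrons and utterly negligible for anything head-sized, which is also why keeping a macroscopic machine coherent is so hard.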
Directed / Produced by Jonathan Fowler and Elizabeth Rodd
Lawrence Krauss: Quantum Co...
|
Filed under: Info/Facts
Toronto Info / Facts
Published Friday, 23 October 2009 11:21
General Info
Courtesy of www.wikipedia.com
Toronto's population is cosmopolitan and international, reflecting its role as an important destination for immigrants to Canada. Toronto is one of the world's most diverse cities by percentage of non-Canadian-born residents, as about 49 percent of the population were born outside of Canada. Because of the city's low crime rates, clean environment, generally high standard of living, and friendlier attitudes to diversity, Toronto is consistently rated as one of the world's most livable cities by the Economist Intelligence Unit and the Mercer Quality of Living Survey. In addition, Toronto was ranked as the most expensive Canadian city in which to live in 2006.
Residents of Toronto are called Torontonians.
Toronto's climate is moderate for Canada due to its southerly location within the country and its proximity to Lake Ontario. It has a humid continental climate (Koppen climate classification Dfa), with warm, humid summers and generally cold winters. The city experiences four distinct seasons with considerable variance in day to day temperature, particularly during the colder weather season. Due to urbanization and proximity to water, Toronto has a fairly low diurnal temperature range, at least in built-up city and lakeshore areas. At different times of the year, this maritime influence has various localized and regional impacts on the climate, including lake effect snow and delaying the onset of spring- and fall-like conditions or seasonal lag.
Toronto winters sometimes feature short cold snaps where maximum temperatures remain below −10 °C (14 °F), often made to feel colder by wind chill. Snowstorms, sometimes mixed with ice and rain, can disrupt work and travel schedules, and accumulating snow can fall anytime from November until mid-April. However, mild stretches also occur throughout winter, melting accumulated snow, with temperatures reaching into the 5 to 14 °C (40 to 57 °F) range and infrequently higher. Summer in Toronto is characterized by long stretches of humid weather. Daytime temperatures occasionally surpass 35 °C (95 °F), with high humidity making it feel oppressive during usually brief periods of hot weather. Spring and autumn are transitional seasons with generally mild or cool temperatures and alternating dry and wet periods.
According to some prominent residents of the city and some important architects who have designed buildings there, Toronto has no single dominant, architectural style. Lawrence Richards, a member of the faculty of architecture at the University of Toronto, has said "Toronto is a new, brash, rag-tag place—a big mix of periods and styles". Toronto buildings vary in design and age with some structures dating back to the mid 1800s, while other prominent buildings were just newly built in the 2000s.
Defining the Toronto skyline is the CN Tower. At a height of 553.33 metres (1,815 ft 5 in), it is the world's second tallest freestanding structure and the tallest tower in the western hemisphere, surpassing Chicago's Sears Tower by 110 metres. It is an important telecommunications hub, and a centre of tourism in Toronto.
Toronto is a city of high-rises, having over 2,000 buildings over 90 metres (300 ft) in height, second only to New York (which has over 5,000 such buildings) in North America. Most of these buildings are residential (either rental or condominium), whereas the central business district contains the taller commercial office towers. There has been recent media attention given to the need to retrofit many of these buildings, which were constructed beginning in the 1950s as residential apartment blocks to accommodate a quickly growing population. Many of the older buildings give off high concentrations of CO2 and are thought to be a significant contributor to the urban heat island effect; there are also aesthetic concerns, as many of the buildings are viewed as urban blights, often surrounded by limited landscaping and concrete parking lots and lacking integration with the surrounding neighbourhoods.
In contrast, Toronto has also begun to experience an architectural overhaul within the past five years. The Royal Ontario Museum, the Gardiner Museum, the Art Gallery of Ontario, and the Ontario College of Art and Design are just some of the many public art buildings that have undergone massive renovations. The historic Distillery District, located on the eastern edge of downtown, is North America's largest and best preserved collection of Victorian era industrial architecture. It has been redeveloped into a pedestrian-oriented arts, culture and entertainment neighbourhood. Modern glass and steel highrises have begun to transform the majority of the downtown area as the condominium market has exploded and triggered widespread construction throughout the city's centre. Trump International Hotel and Tower, Ritz-Carlton, Four Seasons Hotels and Resorts, Shangri-La Hotels and Resorts are just some of the many high rise luxury condominium-hotel projects currently under construction in the downtown core.
Toronto, Ontario, Canada is called "the city of neighbourhoods" because of the strength and vitality of its many communities. The city has upwards of 240 distinct neighbourhoods within its boundaries. Before 1998, Toronto was a much smaller municipality and formed part of Metropolitan Toronto. When the city amalgamated that year, Toronto grew to encompass the former municipalities of York, East York, North York, Etobicoke, and Scarborough. Each of these former municipalities still maintains, to a certain degree, its own distinct identity, and the names of these municipalities are still used by their residents. The area known as Toronto before the amalgamation is sometimes called the "old" City of Toronto, the Central District or simply "Downtown".
The "old" City of Toronto is, by far, the most populous and dense part of the city. It is also the business centre of the city.
The "inner ring" suburbs of York and East York are older, predominantly middle-income areas, and ethnically diverse. Much of the housing stock in these areas consists of old pre-war single-family houses and post-war high-rises. Many of the neighbourhoods in these areas were built up as streetcar suburbs and contain many dense and mixed-use streets. Mostly they share many characteristics with sections of the "old" city, outside of the downtown core.
The "outer ring" suburbs of Etobicoke, Scarborough, and North York are much more suburban in nature (although these boroughs are developing urban centres of their own, such as North York Centre around Mel Lastman Square). The following is a list of the more notable neighbourhoods, divided by the neighbourhoods' location based on the former municipalities, the names of which are still known and commonly used by Torontonians.
What makes Toronto unique in many ways is the engagement of local residents within its neighbourhoods. Many Ratepayers' Associations, Residents' Associations and Homeowners' Associations exist and meet regularly. Larger umbrella organizations such as CORRA, FoNTRA and CHIP organize around bigger issues. Many of these organizations have websites.
Historically, as Toronto sprawled out, industrial areas were set up on the outskirts. Over time, they would become part of the inner city as more land was developed further out. This trend would repeat itself, and continues to this day, as the largest factories and warehouses have moved to Peel and York Regions, and the vast majority of industrial areas are in the more suburban parts of the city; Etobicoke (particularly around the airport), North York, and Scarborough. Thus, many of Toronto's former industrial sites have been redeveloped, most notably the Toronto waterfront, and Liberty Village. One of Toronto's most unusual neighbourhoods, the Distillery District contains the largest and best-preserved collection of Victorian industrial architecture in North America. A national heritage site, it was listed by National Geographic magazine as a "top pick" in Canada for travellers. Similar areas that still retain their character, but are now largely residential are the Fashion District, Corktown, and parts of South Riverdale and Leslieville. Toronto still has some active older industrial areas, such as the Brockton Village, and New Toronto areas. In the west end of Old Toronto and York, the Weston/Mount Dennis and Junction areas have a sense of grit to them, as they still contain factories, but are mostly residential.
Culture and Events
Toronto's Caribana festival takes place from mid-July to early August of every summer, and is one of North America's largest street festivals. For the most part, Caribana is based on the Trinidad and Tobago Carnival, and the first Caribana took place in 1967 when the city's Caribbean community celebrated Canada's Centennial year. 40 years later, it has grown to attract one million people to Toronto's Lake Shore Boulevard annually. Tourism for the festival is in the hundreds of thousands, and each year the event brings in about $300 million.
Pride Week in Toronto takes place in mid-June, and is one of the largest LGBT festivals in the world. It attracts more than one million people from around the world, and is one of the largest events to take place in the city. Toronto is a major centre for gay and lesbian culture and entertainment, and the gay village is located in the Church and Wellesley area of Downtown.
Toronto is currently ranked 14th in the world with over 4 million tourist arrivals a year. Toronto's most prominent landmark is the CN Tower, which at 553 metres (1,815 ft) stood as the tallest free-standing structure on land in the world. To the surprise of its creators, the tower held the world record for over 30 years.
The Royal Ontario Museum (ROM) is a major museum for world culture and natural history. The Toronto Zoo, one of the largest in the world,[45][46] is home to over 5,000 animals representing over 460 distinct species. The Art Gallery of Ontario contains a large collection of Canadian, European, African and contemporary artwork. The Gardiner Museum of ceramic art is the only museum in Canada entirely devoted to ceramics; its collection contains more than 2,900 ceramic works from Asia, the Americas, and Europe. The Ontario Science Centre always has new hands-on activities and science displays particularly appealing to children, and the Bata Shoe Museum also features many unique exhibitions. The Don Valley Brick Works is a former industrial site, which opened in 1889 and has recently been restored as a park and heritage site. The Canadian National Exhibition is held annually at Exhibition Place; it is the oldest annual fair in the world, Canada's largest annual fair and the fifth largest in North America, with an average attendance of 1.25 million.
The Yorkville neighbourhood is one of Toronto's most elegant shopping and dining areas. On many occasions, celebrities from all over North America can be spotted in the area, especially during the Toronto International Film Festival. The Toronto Eaton Centre is one of North America's top shopping destinations, and Toronto's most popular tourist attraction with over 52 million visitors annually.
Greektown on the Danforth, is another one of the major attractions of Toronto which boasts one of the highest concentrations of restaurants per kilometre in the world. It is also home to the annual "Taste of the Danforth" festival which attracts over one million people in 2 1/2 days. Toronto is also home to Canada's most famous "castle" - Casa Loma, the former estate of Sir Henry Pellatt, a prominent Toronto financier, industrialist and military man. Other notable neighbourhoods and attractions include The Beaches, the Toronto Islands, Kensington Market, Fort York, and the Hockey Hall of Fame.
Toronto is home to the Toronto Maple Leafs, one of the National Hockey League's Original Six clubs, and has served as home to the Hockey Hall of Fame since 1958. The city has a rich history of hockey championships: the Maple Leafs have won 14 Stanley Cup titles. Toronto is the only Canadian city with representation in six major league sports, with teams in the National Hockey League, Major League Baseball, the National Lacrosse League, the National Basketball Association, the Canadian Football League and Major League Soccer, as well as sharing a National Football League franchise with the city of Buffalo, New York. The major sports complexes include the Air Canada Centre, Rogers Centre (formerly known as SkyDome), Ricoh Coliseum and BMO Field.
The city is represented in the Canadian Football League by the Toronto Argonauts who have won 15 Grey Cup titles. Toronto played host to the 95th Grey Cup in 2007, the first held in the city since 1992. The city is also home to Major League Baseball's Toronto Blue Jays, who have won two World Series titles and is currently the only major league baseball team in Canada. Both teams play their home games at the Rogers Centre, in the downtown core.
Toronto is a major international centre for business and finance. Generally considered the financial capital of Canada, Toronto has a high concentration of banks and brokerage firms on Bay Street, in the Financial District. The Toronto Stock Exchange is the world's seventh-largest stock exchange by market capitalization. All of the Big Five banks of Canada are headquartered in Toronto.
The city is an important centre for the media, publishing, telecommunications, information technology and film production industries; it is home to Thomson Corporation, CTVglobemedia, Rogers Communications, Alliance Atlantis and Celestica. Other prominent Canadian corporations in Toronto include Four Seasons Hotels, the Hudson's Bay Company and Manulife Financial.
Although much of the region's manufacturing activities take place outside the city limits, Toronto continues to be an important wholesale and distribution point for the industrial sector. The city's strategic position along the Quebec City-Windsor Corridor and its extensive road and rail connections help support the nearby production of motor vehicles, iron, steel, food, machinery, chemicals and paper. The completion of the St. Lawrence Seaway in 1959 gave ships access to the Great Lakes from the Atlantic Ocean.
The last complete census by Statistics Canada estimated there were 2,503,281 people residing in Toronto in June 2006. The city's population grew by 4% (96,073 residents) between 1996 and 2001, and 1% (21,787 residents) between 2001 and 2006. Persons aged 14 years and under made up 17.5% of the population, and those aged 65 years and over made up 13.6%. The median age was 36.9 years. Foreign-born people made up 49.9% of the population.
As of 2001, 42.8% of the residents of the city proper belong to a visible minority group, and visible minorities are projected to comprise a majority in Toronto by 2017. According to the United Nations Development Programme, Toronto has the second-highest percentage of foreign-born population among world cities, after Miami, Florida. Statistics Canada's 2006 figures indicate that Toronto surpassed Miami that year. While Miami's foreign-born population consists mostly of Cubans and other Latin Americans, no single nationality or culture dominates Toronto's immigrant population, placing it among the most diverse cities in the world.
In 2001, people of European ethnicities formed the largest cluster of ethnic groups in Toronto at 57.2%, mostly of English, Irish, Scottish, Italian, and French origins, while the five largest visible minority groups were Chinese (10.6%), South Asian/Indo-Caribbean (10.3%), Black/Afro-Caribbean (8.3%), Filipino (3.5%) and Latin American (2.2%). This diversity is reflected in Toronto's ethnic neighbourhoods, which include Little Italy, The Junction, Little Jamaica, Little India, Chinatown, Koreatown, Greektown, Portugal Village, Corso Italia, Kensington Market, and The Westway.
Christianity is the largest religious group in Toronto. The 2001 Census reports that 31.1% of the city's population is Catholic, followed by Protestant at 21.1%, Christian Orthodox at 4.8%, Coptic Orthodox at 0.2%, and other Christians at 3.9%. Other religions in the city are Islam (6.7%), Hinduism (4.8%), Judaism (4.2%), Buddhism (2.7%), Sikhism (0.9%), and other Eastern Religions (0.2%). 18.7% of the population professes no faith.
While English is the predominant language spoken by Torontonians, many other languages have considerable numbers of local speakers, including French, Italian, Chinese, Spanish, Portuguese, Punjabi, Tagalog, and Hindi. Chinese and Italian are the second and third most widely spoken languages at work. As a result, the city's 9-1-1 emergency services are equipped to respond in over 150 languages.
The low crime rate in Toronto has resulted in the city having a reputation as one of the safest major cities in North America. In 1999, the homicide rate for Toronto was 1.9 per 100,000 people, compared to Atlanta (34.5), Boston (5.5), New York City (9.1), Vancouver (2.8), and Washington, D.C. (45.5). For robbery rates, Toronto also ranks low, with 115.1 robberies per 100,000, compared to Dallas (583.7), Los Angeles (397.9), Montreal (193.9), New York City (490.6), and Washington, D.C. (670.6). Toronto has a comparable rate of car theft to various U.S. cities, although it is not among the highest in Canada. The overall crime rate averaged 48 incidents per 100,000, compared to Cincinnati (326), Los Angeles (283), New York City (225), and Vancouver (239). However, many in the city, especially the local media, have concerns regarding gun violence, gangs, and racial profiling by Toronto Police against minorities.
Toronto recorded its largest number of homicides in 1991 with 89; a rate of 3.9 per 100,000. In 2005, Toronto media coined the term "Year of the Gun", because the number of gun-related homicides reached 52 out of 80 murders in total; almost double the 27 gun deaths recorded the previous year. The total number of homicides dropped to 69 in 2006. Additionally, during the first half of 2006, there were 137 (13 fatal) shooting incidents in the city, down marginally from 164 (19 fatal) in the first half of 2005. 84 murders were committed in 2007, nearly eclipsing the record of 89, and roughly half of them involved firearms. Gang-related incidents have also been on the rise; between 1997 and 2005, over 300 gang-related murders occurred. As a result, the Ontario government has come up with an anti-gun strategy.
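The per-100,000 rates quoted in this section come from a simple normalization of raw counts by population. As a quick sketch (the 2.28 million figure for 1991 Metro Toronto is an assumption, back-derived from the quoted rate of 3.9, not a figure from this article):

```python
def rate_per_100k(count, population):
    """Convert a raw incident count to a rate per 100,000 residents."""
    return count * 100_000 / population

# 89 homicides in 1991 against an assumed metro population of ~2.28 million
print(round(rate_per_100k(89, 2_280_000), 1))  # -> 3.9
```

The same function reproduces any of the comparative rates above once the relevant city's population is supplied.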
Toronto is home to a number of post-secondary academic institutions. The University of Toronto, established in 1827, is the oldest university in Ontario and a leading public research institution. It is a worldwide leader in biomedical research and houses North America's third largest library system, after that of Harvard University and Yale University. York University, located in the north end of Toronto, houses the largest law library in the Commonwealth of Nations. The city is also home to Ryerson University, Ontario College of Art & Design, and the University of Guelph-Humber.
There are five diploma-granting colleges in Toronto: Seneca College, Humber College, Centennial College, Sheridan College and George Brown College. In nearby Oshawa (usually considered part of the Greater Toronto Area) are Durham College and the new University of Ontario Institute of Technology. The Royal Conservatory of Music, which includes The Glenn Gould School, is a noted school of music located downtown. The Canadian Film Centre is a film, television and new media training institute founded by filmmaker Norman Jewison. Tyndale University College and Seminary is a transdenominational Christian post-secondary institution and Canada's largest seminary.
The Toronto District School Board (TDSB) operates 558 public schools. Of these, 451 are elementary and 102 are secondary (high) schools. This makes the TDSB the largest school board in Canada. Additionally, the Toronto Catholic District School Board manages the city's publicly funded Roman Catholic schools, while the Conseil scolaire de district du Centre-Sud-Ouest and the Conseil scolaire de district catholique Centre-Sud manage public and Roman Catholic French-language schools. There are also numerous private university-preparatory schools, such as Upper Canada College, Crescent School, Toronto French School, University of Toronto Schools, Havergal College, Bishop Strachan School, Branksome Hall, and St. Michael's College School.
The Toronto Public Library is the largest public library system in Canada, consisting of 99 branches with more than 11 million items in its collection.
The Toronto Transit Commission (TTC) is the third largest public transit system in North America, after the New York City Transit Authority and the Mexico City Metro. The TTC provides public transit within the City of Toronto. The backbone of its public transport network is the subway system. The TTC also operates an extensive network of buses and streetcars.
The Government of Ontario also operates an extensive rail and bus transit system called GO Transit in the City of Toronto, as well as in its suburbs. With thirty-eight trains and seven train lines, GO Transit runs 179 trips and carries over 160,000 passengers in the Greater Toronto Area every day. An additional 288 GO buses feed the main rail lines.
Canada's busiest airport, Toronto Pearson International Airport (IATA: YYZ), straddles the city's western boundary with the suburban city of Mississauga. Limited commercial and passenger service is also offered from the Toronto City Centre Airport, on the Toronto Islands. Toronto/Buttonville Municipal Airport in Markham provides general aviation facilities. Toronto/Downsview Airport, near the city's north end, is owned by de Havilland Canada and serves the Bombardier Aerospace aircraft factory.
There are a number of expressways and highways that serve Toronto and the Greater Toronto Area. In particular, Highway 401 bisects the city from west to east, bypassing the downtown core. It is one of the busiest highways in the world. The square grid of major city streets was laid out by the concession road system.
|
Definition of fruitless in English:
Line breaks: fruit|less
Pronunciation: /ˈfruːtləs/
1Failing to achieve the desired results; unproductive or useless: his fruitless attempts to publish poetry
More example sentences
• And when their efforts again prove mostly fruitless, the cycle starts anew.
• Trying to make ourselves stronger than everyone else is surely an unproductive, ultimately fruitless endeavor.
• But measuring the group's many recordings against each other is ultimately fruitless.
2(Of a tree or plant) not producing fruit: a banana leaf from a fruitless palm
Derivatives
fruitlessly (adverb)
• Don't waste energy fruitlessly pursuing it; distract yourself with something productive, be it whittling, knitting or washing dishes.
• Ignorant of the word ‘pushchair’, she fruitlessly searched London for plastic covers for her ‘stroller’.
fruitlessness (noun)
• Acknowledging the basic fruitlessness of human existence is important, but so is grinning.
• For further proof of the fruitlessness of the efforts in either penalty box there was the final frantic exchanges in County's box.
• The depiction of the brutality and fruitlessness of war leaves a lasting impact on the reader.
|
'Beasts: What Animals Can Teach Us About the Origins of Good and Evil': wisdom from the wild kingdom
Jeffrey Moussaieff Masson offers a unique perspective on cruelty, compassion and the human condition
Why are humans so violent, cruel and vengeful? Not every one of us, and not every minute, but many of us and daily ... read this newspaper today or any day to scroll the roster of evil deeds.
In particular, Jeffrey Moussaieff Masson wonders if our savagery is shared by other apex predators. Do orcas and crocodiles, wolves and lions wage war, commit murder and rape, abuse their offspring? Do they torture members of their own and other species?
By Jeffrey Moussaieff Masson
Bloomsbury ($26)
Mr. Masson answers his rhetorical questions quickly, knowing non-PETA readers don't enjoy books this full of painful information: "If there is one insight I feel a reader should take away from this book, it is that no serious evidence supports the idea that other animals besides humans engage in mass killing of one another."
The word "bestial" insults other animals. In a swift survey of the available literature, Mr. Masson shows that no other species kills gratuitously unless the killing animals have become deranged (usually by human contact). But there's hope for humanity.
Unlike many philosophers, from the Old Testament on, he doesn't believe humans are wicked from birth. Instead, Mr. Masson thinks our species learned how to be cruel when we shifted from hunter-gatherers to farmers and domesticators of animals. He's not the only one to subscribe to this theory -- Jared Diamond, among others, damns the moment we turned our spears into plowshares.
Apparently, a surplus of food doesn't make humans healthy, wealthy and wise; it makes us possessive, violent and paranoid. Even Jane Goodall saw this among the Tanzanian chimps she had fed bananas. The surplus food changed them for the worse.
Chimps given bananas began to fight with each other, their aggressive behavior escalating until even Ms. Goodall had to acknowledge that this species she loved had "a dark side."
Cultivating plants and domesticating animals go hand in hand. With the possible exception of the wolves that became dogs, no species prefers domestication to wildness.
Domestication means enclosure and violent death. People who work in slaughterhouses suffer from intense stress, while people who eat meat practice amnesia. Rare is the person enjoying a steak, a rack of ribs or a drumstick who wants to imagine the life story of the cow, pig or chicken whose dismembered body part lies upon her plate.
Mr. Masson's running plea for us all to become vegetarians does make sense, and I must admit it's convinced me. I'm someone who's been on the fence for a long time, knowing but trying to ignore the realities of factory farming every time I eat meat. Mr. Masson has pushed me in a greener direction.
Humans don't need meat for survival and never did. It's fast and satisfying protein, but enjoying it requires a suspension of compassion: "Perhaps the roots of indifference are to be found in our willingness to eat what was once a living, feeling animal. If we gave that up, would we reconnect with feelings we have suppressed, repressed or never known?"
Altruism, mercy and empathy number among the feelings Mr. Masson thinks we'll rediscover. He believes "our species has suffered a kind of PTSD since the origins of agriculture and domestication of animals."
As the former director of the Freud Archives, he reminisces about his many conversations with Anna Freud. Like her father, she didn't believe humans were capable of real altruism, a theory that renders any hope of change moot.
Mr. Masson believes we can change: "We would in my ideal world (and why not strive for one?) stop eating animals, stop experimenting on them, stop wearing them, stop exploiting them in any way, and certainly stop comparing them to us negatively."
Slavery degraded the slaveholders as well as the slaves, and animal-rights advocates invariably compare our treatment of animals to slavery. If liberating animals would stop human violence, who wouldn't be for it? My own hope lies in the further development of lab-engineered meat. If I could have my bacon and burgers without killing animals, I'd be one happy carnivore!
For I do agree with Mr. Masson that we are the apex predators of the world, as cruel and bloodthirsty a species as ever lived. (His "Appendix VI: The Problem With Pinker on the Problem of Human Violence" nicely debunks Steven Pinker's 2011 book "The Better Angels of Our Nature," which claims that, historically, violence has declined.)
Where I fear Mr. Masson has gone awry is in his assertion that the advent of agriculture made us brutes. Instead, I remember the fall series of lectures on human evolution sponsored by the University of Pittsburgh's Anthropology Department. I had the good fortune to be invited to dinner by Jeffrey Schwartz, the main organizer, when David Lordkipanidze, the head of the scientific team that found ancient hominin skulls in Dmanisi, Georgia, delivered the final talk.
When I asked why Neanderthals and other hominins disappeared in every area where Homo sapiens appeared, the two men exchanged chagrined and knowing looks. "Do you think our species killed them off?" I asked. The answer was silence.
Susan Balee volunteers at the Animal Rescue League as a dog walker and fosterer of dogs and cats.
|
Classics and Modern Languages
Thucydides’ moral chaos
The error of seeing history as governed by social forces
Huns besieging Aquileia (14th century)
Why the Middle Ages are still with us
Early medieval problems – Schleswig-Holstein, Alsace-Lorraine, Belgium, more recently Yugoslavia – have resisted even the sleekest of modern political solutions
Mr and Mrs Borges
A ringside seat at an ill-fated marriage
Arts & Commentary
The paradox of Charles Ives
How the foundational American music of a rebel composer became the subject of a century-spanning controversy
Why (cell) size matters
Taking a new look at the grandeur of life under the microscope – life that, in terms of total biomass, considerably outweighs its larger, multi-cellular counterpart
Philosophy & Religion
What makes Islam unique?
How Islam compares with other creeds – in the past, present and future
Literature & Poetry
Rounded with a Sleep
A new poem by Clive James
Politics & Social Studies
Pure agony
How pain is shaped by language and history
Present-day Pre-Raphaelite
Nicola Barker’s provocative, spiritual new novel
In the next TLS
• VICTOR DAVIS HANSON What war is good for
• RICHARD SIEBURTH Blaise Cendrars, back in the sky
• NADIA ATIA Stories of old Baghdad
• ARCHIE BROWN Ukraine at the end of the USSR
|
You Ask, I Answer: Nitrates, Nitrites… and then Some!
Posted Nov 13 2009 10:00pm
A recent post on cured meats, cancer risk, and nitrates sparked a significant number of comments and personal e-mails.
Here, then, is a compilation of all the questions I received on the subject, along with the answers.
What are nitrates?
Although they can be manufactured in laboratories (mainly to cure meats), nitrates are a type of inorganic (jargon for “carbon-free”) chemical found in nature.
Fertilizers and sewage contain significant amounts of nitrates (they contain high amounts of nitrogen, which bacteria feast on and, among other things, convert into nitrates).
Is there a difference between nitrates and nitrites?
Not much, in practice. Chemically, a nitrate ion (NO3-) carries one more oxygen atom than a nitrite ion (NO2-), but most food manufacturers prefer nitrites because they present fewer complications from a processing standpoint.
It’s akin to asking if there is a significant difference, nutritionally speaking, between the artificial sweeteners Splenda and aspartame. Although their makeup is different, they are used in similar ways.
Are nitrates only found in cold cuts?
No. Certain vegetables — including spinach, celery, lettuce, and eggplant — contain nitrates.
So, then, why do we only hear about nitrates and cold cuts?
For two reasons. First, cold cuts contain higher amounts of nitrates/nitrites than vegetables.
Second, the average American consumes more cold cuts than celery, spinach, or eggplant.
What are the health risks of consuming too many nitrates?
This is where it all gets interesting — and slightly complicated.
A large portion of nitrates are converted into nitrites by our bodies.
Obviously, if you consume ham that contains nitrites, this first step is a moot point.
Nitrites can then combine with particular compounds known as amines in the stomach.
This combination forms a new hybrid compound: nitrosamines.
Due to the cellular damage they cause, nitrosamines have been linked with higher risks of a wide array of cancers — particularly that of the prostate, colon, and pancreas.
Earlier this summer, a study in the Journal of Alzheimer’s Disease concluded that frequent consumption of nitrates and nitrites relates to higher risks of developing the neural disorder.
Some research also suggests that when nitrites in food are exposed to high heat — as they are, say, when you fry bacon — their chemical structure morphs into that of nitrosamines.
PS: Another reason why you don’t hear much about nitrites in vegetables? All nitrate-containing vegetables also provide vitamin C, which has been shown to reduce the formation of nitrosamines in the body.
Are there any guidelines for what amount of nitrates is safe to consume?
The Environmental Protection Agency has come up with a "parts per million" guideline for the water supply, but there is no exact figure for food.
The general idea with cold cuts is: the less, the better. Conservative guidelines recommend no more than two ounces per week, while more liberal recommendations place the limit at six ounces per week.
Since vitamins C and E appear to reduce nitrite-to-nitrosamine conversion, one “safety measure” you can always take is to include a food high in either of those nutrients in a meal that contains processed meats.
For example, add plenty of sliced tomatoes to a ham sandwich, or make bacon the accompaniment to a broccoli and red pepper frittata.
Do organic cold cuts contain nitrites?
Some of them don’t. As with everything else, it’s always good to check the ingredient list.
|
June 14, 2013|Lauren Ritchie, COMMENTARY
Question: What's green and brown and wet all over?
Answer: Unfortunately, Lake Harris, Little Lake Harris and the Dead River.
This is not so funny. Have you noticed? The lakes and their connector to Lake Eustis, always dark from tannins, have turned a murky brown.
Scoop up a handful of water and you'll see tiny brown particles floating in a milky solution. Now, go wash your hands.
The brown unfortunately is an algae called cylindrospermopsis, and Lake County is getting a nasty dose of it early this summer. Most often, algae bloom later in the season when the heat is relentless and the water temperature is up.
The tricky thing about this wily bacterium, nicknamed "Cy" by biologists, is determining whether it can produce the toxin cylindrospermopsin, which can affect the liver, kidneys, heart and other organs. Some strains also produce a neurotoxin that can cause respiratory distress and paralysis in those eating fish caught in a water body with a bloom.
And Cy has an evil twist, too: When killed by, say, chlorine, the dying cells can throw off a deadly toxin. That's what happened in 1996 when water pumped from a shallow reservoir in northwest Brazil and treated with chlorine was used on kidney dialysis patients. About 60 of them died because scientists didn't yet know about Cy's uncanny ability to take revenge on humans who kill it.
The current version of Cy seems far more benign. In fact, research by a University of Florida graduate student shows that the sort of Cy growing in Florida doesn't even carry the gene that allows it to produce harmful toxins.
So, that's good news and bad, said Ron Hart, a lakes expert with the Lake County Water Authority.
"The good news is we're getting the strain that may not be capable of producing toxins. The bad news is we still have it. It looks bad and makes the water nasty, so you don't want to go into it or swim in it," Hart said.
"It's definitely not the kind of thing you want to drink."
All-righty then. No Scotch and Cy for me.
For years, scientists didn't think that Cy existed in Florida waters. They began identifying it in the 1990s, when more powerful microscopes made it possible to detect the tiny organism, which appears as either a straight rod or a coiled rod. Samples of lake water preserved from the 1970s now show that the algae was present even then, Hart said.
What triggers the algae to produce its vicious toxins — other than its own death — isn't well understood, said Andy Chapman, an algae specialist with Greenwater Laboratories in Gainesville. The last big blooms of Cy were in 2000, he said, and that's when the grad student found the strains that lacked the toxin gene.
This year's early bloom hit because several factors came together at once, Hart said. A couple of months ago, he said, the Leesburg area got two heavy periods of rain with more than an inch apiece. At the same time, there was a bit of an increase in nutrients in the water and the temperature spiked in the lakes.
Voila. Algae.
Chapman, too, warned about ingesting the water with Cy in it, though he said, "there's not a whole lot of evidence" that the toxins accumulate in fish that swim in the water. Don't you just love how scientists talk? It seems a person would do well to worry even if there were just a smidgen of evidence.
Still, the most likely outcome of human contact with Cy comes from getting in the water with it.
"A lot of people have skin reactions," Chapman observed. "It's not uncommon."
Lauren invites you to send her a friend request on Facebook at
|
Moving Towards More Simple Assistive Technology Systems
ATHEN E-Journal Issue #3 (2007)
Paul Blenkhorn
Professor of Assistive Technology
School of Informatics
University of Manchester
For many people, assistive technologies provide significant help in accessing standard applications. For some, e.g. blind people, access would be impossible without such support. However, as assistive technologies have developed over the years and become more and more sophisticated, there is a danger that they themselves start to become an obstacle to access for many users. Although the systems still support "accessibility", they become less "usable". This paper outlines this potential problem and presents some possible ways forward.
accessibility, assistive technology, usability
Assistive Technology for "Print Impaired" people
In this presentation Assistive Technology is taken to mean software systems that help a person who has difficulty accessing their computer. Here the focus is on print impairments, i.e. blind and visually impaired people, and sighted people who have difficulties with print, e.g. dyslexic people.
For Blind and Visually Impaired People
This group has difficulty seeing the information on the computer's screen. To help, they use screenreaders and magnifiers. Examples of screenreaders include JAWS, Window-Eyes, Supernova, and Thunder. Magnification systems include Zoomtext, Magic and Lightning. There are also systems that both magnify and speak. Users typically cannot access the computer without their assistive technology; it is therefore used all the time.
For sighted people who have difficulty reading and writing
This group includes dyslexic people, those for whom English is not their first language, and more broadly people who are illiterate for a variety of reasons. Some access may be possible without assistive technology, but tools for organising and structuring information can help (some) users, as can tools that vocalise information on the screen.
Current problems
The main problem I wish to focus on here is the gradual increase in complexity of Assistive Technology systems.
Feature “creep”
Originally, assistive technologies were developed to give a significant proportion of the (often very able) target population basic access to some fairly straightforward applications. However, as that population became more sophisticated it demanded access to more applications, and new users came on board with slightly different needs. The assistive technologies have therefore become increasingly feature rich, to the point where they may be too complex for the intended users, and can be particularly difficult for new users.
"Assistive Technologies" which are really business tools
Quite rightly we have made use of existing business tools that can significantly help our users. However, the interface (dialog box), menus, etc. may be rather complex either in layout or structure. Finding an option in a deep hierarchy or a button amongst a great deal of text and graphics can be hard.
Here I use the spell checker dialog from Microsoft Word to illustrate a fairly complex dialog that may be difficult for some users. However, I could equally have used dialogs from some optical character recognition (OCR) systems, or deeply nested dialog boxes or menus, etc.
The way forward?
The way forward is not clear; however, here are some ideas:
• Keep considering users, especially those who are not "the most able". We need to consider the functional needs of users in a given environment. We all know that we should focus on the individual and not the disability, so we need to be wary of prescribing AT by rote according to a person's label rather than to the actual person; e.g. not all dyslexic people can work effectively using a specialised concept mapping program.
• New software and new technologies. As our understanding of users' needs increases and as technological capabilities increase, we can explore new ways of supporting individuals. We must remember the danger of being carried away by the technology!
• Provide a collection of smaller, more focused applications rather than "bloatware". Rather than users being given one application that provides many solutions, maybe we can provide several smaller applications. Of course, this can cause difficulties if the applications have different interfaces and can be disastrous if they do not co-exist. This is often the case with Assistive Technologies from different manufacturers.
• Evaluate the effectiveness of the technology that is being used, and replace it if it is not effective.
Chronic wound
A chronic wound is a wound that does not heal in an orderly set of stages and in a predictable amount of time the way most wounds do; wounds that do not heal within three months are often considered chronic.[1] Chronic wounds appear to be stalled in one or more of the phases of wound healing. For example, chronic wounds often remain in the inflammatory stage for too long.[2][3] In acute wounds, there is a precise balance between production and degradation of molecules such as collagen; in chronic wounds this balance is lost and degradation plays too large a role.[4][5]
Chronic wounds may never heal or may take years to do so. These wounds cause patients severe emotional and physical stress and create a significant financial burden on patients and the whole healthcare system.[6]
Acute and chronic wounds are at opposite ends of a spectrum of wound healing types that progress toward being healed at different rates.[7]
Signs and symptoms
Chronic wound patients often report pain as dominant in their lives.[8] It is recommended that healthcare providers handle the pain related to chronic wounds as one of the main priorities in chronic wound management (together with addressing the cause). Six out of ten venous leg ulcer patients experience pain with their ulcer,[9] and similar trends are observed for other chronic wounds.
Persistent pain (at night, at rest, and with activity) is the main problem for patients with chronic ulcers.[10] Frustrations regarding ineffective analgesics and plans of care that they were unable to adhere to were also identified.
In addition to poor circulation, neuropathy, and difficulty moving, factors that contribute to chronic wounds include systemic illnesses, age, and repeated trauma. Comorbid ailments that may contribute to the formation of chronic wounds include vasculitis (an inflammation of blood vessels), immune suppression, pyoderma gangrenosum, and diseases that cause ischemia.[2] Immune suppression can be caused by illnesses or medical drugs used over a long period, for example steroids.[2] Emotional stress can also negatively affect the healing of a wound, possibly by raising blood pressure and levels of cortisol, which lowers immunity.[6]
What appears to be a chronic wound may also be a malignancy; for example, cancerous tissue can grow until blood cannot reach the cells and the tissue becomes an ulcer.[11] Cancer, especially squamous cell carcinoma, may also form as the result of chronic wounds, probably due to repetitive tissue damage that stimulates rapid cell proliferation.[11]
Another factor that may contribute to chronic wounds is old age.[12] The skin of older people is more easily damaged, and older cells do not proliferate as fast and may not have an adequate response to stress in terms of gene upregulation of stress-related proteins.[12] In older cells, stress response genes are overexpressed when the cell is not stressed, but when it is, the expression of these proteins is not upregulated by as much as in younger cells.[12]
Chronic wounds may affect only the epidermis and dermis, or they may affect tissues all the way to the fascia.[15] They may be formed originally by the same things that cause acute ones, such as surgery or accidental trauma, or they may form as the result of systemic infection, vascular, immune, or nerve insufficiency, or comorbidities such as neoplasias or metabolic disorders.[15] The reason a wound becomes chronic is that the body’s ability to deal with the damage is overwhelmed by factors such as repeated trauma, continued pressure, ischemia, or illness.[7][15]
Ischemia is an important factor in the formation and persistence of wounds, especially when it occurs repetitively (as it usually does) or when combined with a patient’s old age.[12] Ischemia causes tissue to become inflamed and cells to release factors that attract neutrophils such as interleukins, chemokines, leukotrienes, and complement factors.[12]
While they fight pathogens, neutrophils also release inflammatory cytokines and enzymes that damage cells.[2][12] One of their important jobs is to produce Reactive Oxygen Species (ROS) to kill bacteria, for which they use an enzyme called myeloperoxidase.[12] The enzymes and ROS produced by neutrophils and other leukocytes damage cells and prevent cell proliferation and wound closure by damaging DNA, lipids, proteins,[16] the extracellular matrix (ECM), and cytokines that speed healing.[12] Neutrophils remain in chronic wounds for longer than they do in acute wounds, and contribute to the fact that chronic wounds have higher levels of inflammatory cytokines and ROS.[3][5] Since wound fluid from chronic wounds has an excess of proteases and ROS, the fluid itself can inhibit healing by inhibiting cell growth and breaking down growth factors and proteins in the ECM.[2]
Bacterial colonization
Since more oxygen in the wound environment allows white blood cells to produce ROS to kill bacteria, patients with inadequate tissue oxygenation, for example those who suffered hypothermia during surgery, are at higher risk for infection.[12] The host’s immune response to the presence of bacteria prolongs inflammation, delays healing, and damages tissue.[12] Infection can lead not only to chronic wounds but also to gangrene, loss of the infected limb, and death of the patient.
Like ischemia, bacterial colonization and infection damage tissue by causing a greater number of neutrophils to enter the wound site.[2] In patients with chronic wounds, bacteria with resistances to antibiotics may have time to develop.[17] In addition, patients that carry drug resistant bacterial strains such as methicillin-resistant Staphylococcus aureus (MRSA) have more chronic wounds.[17]
Growth factors and proteolytic enzymes
Chronic wounds also differ in makeup from acute wounds in that their levels of proteolytic enzymes such as elastase[4] and matrix metalloproteinases (MMPs) are higher,[18] while their concentrations of growth factors such as platelet-derived growth factor and keratinocyte growth factor are lower.[5][15]
Since growth factors (GFs) are imperative in timely wound healing, inadequate GF levels may be an important factor in chronic wound formation.[15] In chronic wounds, the formation and release of growth factors may be prevented, the factors may be sequestered and unable to perform their metabolic roles, or degraded in excess by cellular or bacterial proteases.[15]
Chronic wounds such as diabetic and venous ulcers are also caused by a failure of fibroblasts to produce adequate ECM proteins and of keratinocytes to epithelialize the wound.[19] Fibroblast gene expression is different in chronic wounds than in acute wounds.[19]
Though all wounds require a certain level of elastase and proteases for proper healing, too high a concentration is damaging.[4] Leukocytes in the wound area release elastase, which increases inflammation, destroys tissue, proteoglycans, and collagen,[20] and damages growth factors, fibronectin, and factors that inhibit proteases.[4] The activity of elastase is increased by human serum albumin, which is the most abundant protein found in chronic wounds.[4] However, chronic wounds with inadequate albumin are especially unlikely to heal, so regulating the wound's levels of that protein may in the future prove helpful in healing chronic wounds.[4]
If a chronic wound becomes more painful, this is a good indication that it is infected.[22] A lack of pain, however, does not mean that it is not infected.[22] Other methods of determination are less effective.[22]
The vast majority of chronic wounds can be classified into three categories: venous ulcers, diabetic ulcers, and pressure ulcers.[7][12] A small number of wounds that do not fall into these categories may be due to causes such as radiation poisoning or ischemia.[12]
Venous and arterial ulcers
Venous ulcers, which usually occur in the legs, account for about 70% to 90% of chronic wounds[2] and mostly affect the elderly. They are thought to be due to venous hypertension caused by improper function of valves that exist in the veins to prevent blood from flowing backward. Ischemia results from the dysfunction and, combined with reperfusion injury, causes the tissue damage that leads to the wounds.
Diabetic ulcers
Another major cause of chronic wounds, diabetes, is increasing in prevalence.[23] Diabetics have a 15% higher risk for amputation than the general population[2] due to chronic ulcers. Diabetes causes neuropathy, which inhibits nociception and the perception of pain.[2] Thus patients may not initially notice small wounds to legs and feet, and may therefore fail to prevent infection or repeated injury.[7] Further, diabetes causes immune compromise and damage to small blood vessels, preventing adequate oxygenation of tissue, which can cause chronic wounds.[7] Pressure also plays a role in the formation of diabetic ulcers.[12]
Pressure ulcers
Another leading type of chronic wounds is pressure ulcers,[24] which usually occur in people with conditions such as paralysis that inhibit movement of body parts that are commonly subjected to pressure such as the heels, shoulder blades, and sacrum.[25][26] Pressure ulcers are caused by ischemia that occurs when pressure on the tissue is greater than the pressure in capillaries, and thus restricts blood flow into the area.[24] Muscle tissue, which needs more oxygen and nutrients than skin does, shows the worst effects from prolonged pressure.[26] As in other chronic ulcers, reperfusion injury damages tissue.
Though treatment of the different chronic wound types varies slightly, appropriate treatment seeks to address the problems at the root of chronic wounds, including ischemia, bacterial load, and imbalance of proteases.[12] Various methods exist to ameliorate these problems, including antibiotic and antibacterial use, debridement, irrigation, vacuum-assisted closure, warming, oxygenation, moist wound healing, removing mechanical stress, and adding cells or other materials to secrete or enhance levels of healing factors.[23]
Preventing and treating infection
To lower the bacterial count in wounds, therapists may use topical antibiotics, which kill bacteria and can also help by keeping the wound environment moist,[27][28] which is important for speeding the healing of chronic wounds.[3][25] Some researchers have experimented with the use of tea tree oil, an antibacterial agent which also has anti-inflammatory effects.[17] Disinfectants are contraindicated because they damage tissues and delay wound contraction.[28] Further, they are rendered ineffective by organic matter in wounds like blood and exudate and are thus not useful in open wounds.[28]
A greater amount of exudate and necrotic tissue in a wound increases likelihood of infection by serving as a medium for bacterial growth away from the host’s defenses.[12] Since bacteria thrive on dead tissue, wounds are often surgically debrided to remove the devitalized tissue.[27] Debridement and drainage of wound fluid are an especially important part of the treatment for diabetic ulcers, which may create the need for amputation if infection gets out of control. Mechanical removal of bacteria and devitalized tissue is also the idea behind wound irrigation, which is accomplished using pulsed lavage.[12]
Removing necrotic or devitalized tissue is also the aim of maggot therapy, the intentional introduction by a health care practitioner of live, disinfected maggots into non-healing wounds. Maggots dissolve only necrotic, infected tissue; disinfect the wound by killing bacteria; and stimulate wound healing. Maggot therapy has been shown to accelerate debridement of necrotic wounds and reduce the bacterial load of the wound, leading to earlier healing, reduced wound odor and less pain. The combination and interactions of these actions make maggots an extremely potent tool in chronic wound care.
Negative pressure wound therapy (NPWT) is a treatment that improves ischemic tissues and removes wound fluid used by bacteria.[7][12] This therapy, also known as vacuum-assisted closure, reduces swelling in tissues, which brings more blood and nutrients to the area, as does the negative pressure itself.[7] The treatment also decompresses tissues and alters the shape of cells, causing them to express different mRNAs and to proliferate and produce ECM molecules.[2][7]
Treating trauma and painful wounds
Persistent chronic pain associated with non-healing wounds is caused by tissue (nociceptive) or nerve (neuropathic) damage and is influenced by dressing changes and chronic inflammation. Chronic wounds take a long time to heal and patients can suffer from chronic wounds for many years.[29] Chronic wound healing may be compromised by coexisting underlying conditions, such as venous valve backflow, peripheral vascular disease, uncontrolled edema and diabetes mellitus.
If wound pain is not assessed and documented it may be ignored and/or not addressed properly. It is important to remember that increased wound pain may be an indicator of wound complications that need treatment, and therefore practitioners must constantly reassess the wound as well as the associated pain.
Optimal management of wounds requires holistic assessment. Documentation of the patient's pain experience is critical and may range from the use of a patient diary (which should be patient-driven) to recording pain entirely by the healthcare professional or caregiver.[30] Effective communication between the patient and the healthcare team is fundamental to this holistic approach. The more frequently healthcare professionals measure pain, the greater the likelihood of introducing or changing pain management practices.
At present there are few local options for the treatment of persistent pain while managing the exudate levels present in many chronic wounds. Important properties of such local options are that they provide an optimal wound healing environment while providing a constant low-dose local release of ibuprofen during wear time.
If local treatment does not provide adequate pain reduction, it may be necessary for patients with chronic painful wounds to be prescribed additional systemic treatment for the physical component of their pain. Clinicians should consult with their prescribing colleagues referring to the WHO pain relief ladder of systemic treatment options for guidance. For every pharmacological intervention there are possible benefits and adverse events that the prescribing clinician will need to consider in conjunction with the wound care treatment team.
Ischemia and hypoxia
Blood vessels constrict in tissue that becomes cold and dilate in warm tissue, altering blood flow to the area. Thus keeping the tissues warm is probably necessary to fight both infection and ischemia.[25] Some healthcare professionals use ‘radiant bandages’ to keep the area warm, and care must be taken during surgery to prevent hypothermia, which increases rates of post-surgical infection.[12]
Underlying ischemia may also be treated surgically by arterial revascularization, for example in diabetic ulcers, and patients with venous ulcers may undergo surgery to correct vein dysfunction.
Diabetics that are not candidates for surgery (and others) may also have their tissue oxygenation increased by Hyperbaric Oxygen Therapy, or HBOT, which can compensate for limitations of blood supply and correct hypoxia.[16][31][32] In addition to killing bacteria, higher oxygen content in tissues speeds growth factor production, fibroblast growth, and angiogenesis.[2][16] However, increased oxygen levels also means increased production of ROS.[16] Antioxidants, molecules that can lose an electron to free radicals without themselves becoming radicals, can lower levels of oxidants in the body and have been used with some success in wound healing.[5]
Low level laser therapy has been repeatedly shown to significantly reduce the size and severity of diabetic ulcers as well as pressure ulcers.
Pressure wounds are often the result of local ischemia from the increased pressure. Increased pressure also plays a role in many diabetic foot ulcerations, as changes due to the disease cause the foot to suffer limited joint mobility and create pressure points on the bottom of the foot. An effective measure to treat this is a surgical procedure called the gastrocnemius recession, in which the calf muscle is lengthened to decrease the fulcrum created by this muscle, resulting in a decrease in plantar forefoot pressure.[33]
Growth factors and hormones
Since chronic wounds underexpress growth factors necessary for healing tissue, chronic wound healing may be speeded by replacing or stimulating those factors and by preventing the excessive formation of proteases like elastase that break them down.[4][5]
One way to increase growth factor concentrations in wounds is to apply the growth factors directly, though this takes many repetitions and requires large amounts of the factors.[5] Another way is to spread onto the wound a gel of the patient’s own blood platelets, which then secrete growth factors such as vascular endothelial growth factor (VEGF), insulin-like growth factor 1–2 (IGF), PDGF, transforming growth factor-β (TGF-β), and epidermal growth factor (EGF).[15] Other treatments include implanting cultured keratinocytes into the wound to reepithelialize it and culturing and implanting fibroblasts into wounds.[23][27] Some patients are treated with artificial skin substitutes that have fibroblasts and keratinocytes in a matrix of collagen to replicate skin and release growth factors.
In other cases, skin from cadavers is grafted onto wounds, providing a cover to keep out bacteria and preventing the buildup of too much granulation tissue, which can lead to excessive scarring. Though the allograft (skin transplanted from a member of the same species) is replaced by granulation tissue and is not actually incorporated into the healing wound, it encourages cellular proliferation and provides a structure for epithelial cells to crawl across.[2] On the most difficult chronic wounds, allografts may not work, requiring skin grafts from elsewhere on the patient, which can cause pain and further stress on the patient’s system.[3]
Collagen dressings are another way to provide the matrix for cellular proliferation and migration, while also keeping the wound moist and absorbing exudate.[5] Additionally Collagen has been shown to be chemotactic to human blood monocytes, which can enter the wound site and transform into beneficial wound-healing cells.[34]
Since levels of protease inhibitors are lowered in chronic wounds, some researchers are seeking ways to heal tissues by replacing these inhibitors in them.[21] Secretory leukocyte protease inhibitor (SLPI), which inhibits not only proteases but also inflammation and microorganisms like viruses, bacteria, and fungi, may prove to be an effective treatment.[21]
Research into hormones and wound healing has shown estrogen to speed wound healing in elderly humans and in animals that have had their ovaries removed, possibly by preventing excess neutrophils from entering the wound and releasing elastase.[20] Thus the use of estrogen is a future possibility for treating chronic wounds.
Chronic wounds mostly affect people over the age of 60.[12] The incidence is 0.78% of the population and the prevalence ranges from 0.18 to 0.32%.[15] As the population ages, the number of chronic wounds is expected to rise.[24]
References

2. Snyder, Robert J. (2005). "Treatment of nonhealing ulcers with allografts". Clinics in Dermatology 23 (4): 388–95. doi:10.1016/j.clindermatol.2004.07.020. PMID 16023934.
3. Taylor, Jennifer E.; Laity, Peter R.; Hicks, John; Wong, Steven S.; Norris, Keith; Khunkamchoo, Peck; Johnson, Anthony F.; Cameron, Ruth E. (2005). "Extent of iron pick-up in deforoxamine-coupled polyurethane materials for therapy of chronic wounds". Biomaterials 26 (30): 6024–33. doi:10.1016/j.biomaterials.2005.03.015. PMID 15885771.
4. Edwards, J; Howley, P; Cohen, IK (2004). "In vitro inhibition of human neutrophil elastase by oleic acid albumin formulations from derivatized cotton wound dressings". International Journal of Pharmaceutics 284 (1–2): 1–12. doi:10.1016/j.ijpharm.2004.06.003. PMID 15454291.
5. Schönfelder, Ute; Abel, Martin; Wiegand, Cornelia; Klemm, Dieter; Elsner, Peter; Hipler, Uta-Christina (2005). "Influence of selected wound dressings on PMN elastase in chronic wound fluid and their antioxidative potential in vitro". Biomaterials 26 (33): 6664–73. doi:10.1016/j.biomaterials.2005.04.030. PMID 15978664.
6. Augustin, M.; Maier, K. (2003). "Psychosomatic Aspects of Chronic Wounds". Dermatology and Psychosomatics 4: 5. doi:10.1159/000070529.
7. Moreo, Kathleen (2005). "Understanding and overcoming the challenges of effective case management for patients with chronic wounds". The Case Manager 16 (2): 62–3, 67. doi:10.1016/j.casemgr.2005.01.014. PMID 15818347.
8. Krasner, D (1998). "Painful venous ulcers: Themes and stories about living with the pain and suffering". Journal of Wound, Ostomy, and Continence Nursing 25 (3): 158–68. PMID 9678007.
9. Hofman, D; Ryan, TJ; Arnold, F; Cherry, GW; Lindholm, C; Bjellerup, M; Glynn, C (1997). "Pain in venous leg ulcers". Journal of Wound Care 6 (5): 222–4. PMID 9256727.
10. Walshe, Catherine (1995). "Living with a venous leg ulcer: A descriptive study of patients' experiences". Journal of Advanced Nursing 22 (6): 1092–100. doi:10.1111/j.1365-2648.1995.tb03110.x. PMID 8675863.
11. Trent, JT (2003). "Wounds and malignancy". Advances in Skin & Wound Care. Accessed January 1, 2007.
12. Mustoe, Thomas (2004). "Understanding chronic wounds: A unifying hypothesis on their pathogenesis and implications for therapy". The American Journal of Surgery 187 (5): S65. doi:10.1016/S0002-9610(03)00306-4. PMID 15147994.
13. Williams, A.M.; Southern, S.J. (2005). "Conflicts in the treatment of chronic ulcers in drug addicts—case series and discussion". British Journal of Plastic Surgery 58 (7): 997–9. doi:10.1016/j.bjps.2005.04.024. PMID 16040018.
14. Vennemann, B.; Perdekamp, M. Große; Weinmann, W.; Faller-Marquardt, M.; Pollak, S.; Brandis, M. (2006). "A case of Munchausen syndrome by proxy with subsequent suicide of the mother". Forensic Science International 158 (2–3): 195–9. doi:10.1016/j.forsciint.2005.07.014. PMID 16169176.
15. Crovetti, Giovanni; Martinelli, Giovanna; Issi, Marwan; Barone, Marilde; Guizzardi, Marco; Campanati, Barbara; Moroni, Marco; Carabelli, Angelo (2004). "Platelet gel for healing cutaneous chronic wounds". Transfusion and Apheresis Science 30 (2): 145–51. doi:10.1016/j.transci.2004.01.004. PMID 15062754.
16. Alleva, Renata; Nasole, Emanuele; Donato, Ferruccio Di; Borghi, Battista; Neuzil, Jiri; Tomasetti, Marco (2005). "Α-Lipoic acid supplementation inhibits oxidative damage, accelerating chronic wound healing in patients undergoing hyperbaric oxygen therapy". Biochemical and Biophysical Research Communications 333 (2): 404–10. doi:10.1016/j.bbrc.2005.05.119. PMID 15950945.
17. Halcon, L; Milkus, K (2004). "Staphylococcus aureus and wounds: A review of tea tree oil as a promising antimicrobial". American Journal of Infection Control 32 (7): 402–8. doi:10.1016/j.ajic.2003.12.008. PMID 15525915.
18. Wysocki, Annette B.; Staiano-Coico, Lisa; Grinnell, Frederick (1993). "Wound Fluid from Chronic Leg Ulcers Contains Elevated Levels of Metalloproteinases MMP-2 and MMP-9". Journal of Investigative Dermatology 101 (1): 64–8. doi:10.1111/1523-1747.ep12359590. PMID 8392530.
19. Foy, Yvonne; Li, Jie; Kirsner, Robert; Eaglstein, William (2004). "Analysis of fibroblast defects in extracellular matrix production in chronic wounds". Journal of the American Academy of Dermatology 50 (3): P168. doi:10.1016/j.jaad.2003.10.595.
20. Kanda, Naoko; Watanabe, Shinichi (2005). "Regulatory roles of sex hormones in cutaneous biology and immunology". Journal of Dermatological Science 38 (1): 1–7. doi:10.1016/j.jdermsci.2004.10.011. PMID 15795118.
21. Lai, Jeng-Yu; Borson, Nancy D; Strausbauch, Michael A; Pittelkow, Mark R (2004). "Mitosis increases levels of secretory leukocyte protease inhibitor in keratinocytes". Biochemical and Biophysical Research Communications 316 (2): 407–10. doi:10.1016/j.bbrc.2004.02.065. PMID 15020232.
22. Reddy, Madhuri (2012). "Does This Patient Have an Infection of a Chronic Wound?". JAMA: The Journal of the American Medical Association 307 (6): 605. doi:10.1001/jama.2012.98.
23. Velander, Patrik E.; Theopold, Christoph; Gheerardyn, Raphael; Bleiziffer, Oliver; Yao, Feng; Eriksson, Elof (2004). "Autologous cultured keratinocytes suspensions accelerate re-epithelialization in the diabetic pig". Journal of the American College of Surgeons 199 (3): 58. doi:10.1016/j.jamcollsurg.2004.05.119.
24. Supp, Dorothy M.; Boyce, Steven T. (2005). "Engineered skin substitutes: Practices and potentials". Clinics in Dermatology 23 (4): 403–12. doi:10.1016/j.clindermatol.2004.07.023. PMID 16023936.
25. Thomas, David R.; Diebold, Marilyn R.; Eggemeyer, Linda M. (2005). "A controlled, randomized, comparative study of a radiant heat bandage on the healing of stage 3–4 pressure ulcers: A pilot study". Journal of the American Medical Directors Association 6 (1): 46–9. doi:10.1016/j.jamda.2004.12.007. PMID 15871870.
26. Pressure ulcers: Surgical treatment and principles at eMedicine.
27. Brem H, Kirsner RS, Falanga V (July 2004). "Protocol for the successful treatment of venous ulcers". Am. J. Surg. 188 (1A Suppl): 1–8. doi:10.1016/S0002-9610(03)00284-8. PMID 15223495.
28. Patel CV, Powell L, Wilson SE (2000). "Surgical wound infections". Current Treatment Options in Infectious Diseases 2: 147–53. ISSN 1523-3820.
29. Flanagan M, Vogensen H, Haase L (2006). "Case series investigating the experience of pain in patients with chronic venous leg ulcers treated with a foam dressing releasing ibuprofen". World Wide Wounds.
30. Osterbrink J (2003). "Der Deutsche Schmerzstandard und seine Auswirkungen auf die Pflege" [The German pain standard and its effects on nursing]. Die Schwester, der Pfleger 42: 758–64.
31. Kranke, Peter; Bennett, Michael H; Debus, Sebastian E; Roeckl-Wiedmann, Irmgard; Schnabel, Alexander (2004). "Hyperbaric oxygen therapy for chronic wounds". Cochrane Database of Systematic Reviews (2): CD004123. doi:10.1002/14651858.CD004123.pub2. PMID 15106239.
33. Greenhagen, Robert M.; Johnson, Adam R.; Peterson, Matthew C.; Rogers, Lee C.; Bevilacqua, Nicholas J. (2010). "Gastrocnemius Recession as an Alternative to TendoAchillis Lengthening for Relief of Forefoot Pressure in a Patient with Peripheral Neuropathy: A Case Report and Description of a Technical Modification". The Journal of Foot and Ankle Surgery 49 (2): 159.e9–13. doi:10.1053/j.jfas.2009.07.002. PMID 20137982.
34. Postlethwaite, A. E.; Kang, AH (1976). "Collagen- and collagen peptide-induced chemotaxis of human blood monocytes". Journal of Experimental Medicine 143 (6): 1299–307. doi:10.1084/jem.143.6.1299. PMC 2190221. PMID 1271012.
The psychology of cash and credit
Although the concept of credit has been around for thousands of years (the Latin word, credere, means ‘to believe’), legend tells us that the first credit card appeared in 1949 when Frank McNamara, head of the Hamilton Credit Corporation, went out to eat with Alfred Bloomingdale.
While charge cards for individual merchants had existed for some time, the idea of a card that could be used to buy products in more than one place was novel. Diners Club, the first multiple merchant charge/credit card was born, and by the end of the ‘50s more than 20,000 credit cards were in use by consumers.
In 2013, the Reserve Bank estimated that there were 15.41 million credit card accounts in Australia, with a total outstanding balance of $49.19 billion, and the average card had a credit balance of $3,198. Collectively, we Australians are paying interest on a balance of $35.28 billion—that's almost three-quarters of the country’s total credit card balance. Convenience is costly.
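As a quick sanity check, these aggregates are consistent with one another. The sketch below uses only the RBA figures quoted above; the small gap between the computed per-account average and the quoted $3,198 is presumably rounding in the source data.

```python
# Cross-check the 2013 RBA credit card aggregates quoted above.
accounts = 15.41e6           # credit card accounts in Australia
total_balance = 49.19e9      # total outstanding balance, AUD
interest_bearing = 35.28e9   # balance on which interest is paid, AUD

avg_balance = total_balance / accounts             # per-account average
interest_share = interest_bearing / total_balance  # fraction accruing interest

print(f"Average balance per account: ${avg_balance:,.0f}")           # ~$3,192
print(f"Share of total balance accruing interest: {interest_share:.1%}")  # ~71.7%
```

Run as written, this gives roughly $3,192 per account and about 71.7 per cent of the balance accruing interest, which is the "almost three-quarters" figure above.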
Although debit cards have been in Australia since the early 1980s, their alliance with Visa and Mastercard a decade ago has led to a substantial increase in the number of debit cards in the market. Now, with the advent of proximity technology in cards and near field communication (NFC) technology in telephones making life more and more streamlined, it seems we can't get enough of the convenience that comes with not carrying cash.
Ouch, that hurts
Since the late 1990s, researchers have been interested in understanding the psychology of card use, partially because of the ubiquity of credit cards, and also because of the amount of debt in the marketplace. In particular, concern about the effects of debt on vulnerable consumers has led to more of this kind of research amongst the social policy, consumer advocacy and academic communities.
One of the first groups of studies to look at the influence of cards on spending was conducted by George Loewenstein and his colleagues and published in 1998 and 2001. They looked at a broad range of different spending scenarios, including whether people experienced pleasure or pain when spending, the role of ‘coupling’ (the purchase of a product and payment together, as happens with cash), and whether consumers prefer to pay flat or variable rates for products such as internet subscriptions.
One of the many interesting findings was that paying in cash elicited greater psychological pain than other modes of payment, including the use of cards or delayed payment methods. The suggested reason for this was the ‘de-coupling’ of the actual purchase and the pain of paying for it. In other words, handing over cash to a shop assistant couples a loss of available money with a purchase. By paying with a credit card, consumers created a buffer zone between the purchase of the product and the loss of their money (both psychologically and temporally).
Continuing the theme of time gaps between consumption and payment, they also investigated whether the reality of paying off debt was captured by traditional economic models focused purely on utility. In one study, they investigated how debt payment preferences influence a person’s enjoyment of a product (or as they referred to it, ‘the hedonic evaluation of paying for products before or afterward’).
They found that we have different emotional responses to experiences as opposed to material goods. In one experiment, they looked at the purchase of a holiday and the purchase of a washer-dryer unit. If you can pay for your holiday before you go, then you will enjoy it more, they found, but paying later will not significantly affect your enjoyment of the washer-dryer. If we consider the research about materialism discussed in last week’s program of Talking Shop, we could perhaps conclude that it was always going to be difficult to enjoy the washer-dryer, so payment terms probably don’t matter.
In another experiment (this was a big project), Loewenstein and co wanted to see what students in their study would be willing to pay to attend two different sporting events. Depending on the event (one was with a ‘blockbuster’ team, while the other was not such a major event), they found that credit cards commanded a premium of between 60 and 113 per cent.
They concluded that when it came to credit cards, people should ‘always leave home without it’.
More recent research looked at how people felt when they spent vouchers as opposed to cash. In one experiment, some participants were given $50 cash while others were given a $50 gift certificate. Researchers found that people spent more when they were given the certificate than when they were given an equivalent amount in cash.
In 2011, Nathan C. Pettit and Niro Sivanathan examined whether people might be likely to change their purchasing behaviour as a result of the global financial crisis, and in particular, whether the threat to social identity and status associated with the GFC might change their attitudes toward accumulating debt or spending credit. They referred to earlier research which found that when threatened, people often sought to restore balance in their lives by consuming material goods so that they could signal their social identity to others.
They found that the GFC presented a ‘perfect storm’ of vulnerability because of the interaction between self-threat, the need for high status products and the psychology of the payment method. This meant that people would not only seek to consume high-status goods, but because they would be more likely to use credit, this consumption would be done at a higher cost to themselves. When cash or savings were their only option, threatened individuals were no more likely than non-threatened individuals to purchase high-status goods, as the potential psychological rewards of consumption were outweighed by the psychological pain associated with payment.
Perhaps the key advice is not to ‘leave home without it’, but ‘hide it somewhere you can’t find it’.
Are you a virtuous eater?
According to studies published in 2011, a credit card can influence what you buy at the supermarket and the café.
One study, ‘How credit card payments increase unhealthy food purchases’, looked at whether credit cards and cash influenced what people put in their shopping basket at the supermarket. By examining the actual shopping behaviour of 1,000 households over a period of six months, the researchers found that shopping baskets have a larger proportion of food items rated as impulsive and unhealthy when shoppers use credit or debit cards. They also found that participants spent up to 40 per cent more on what they termed ‘vice’ products (biscuits, cakes, and pies, for example) when they were using credit cards. They also found that the mode of payment didn’t affect the amount spent on ‘virtue’ products (rolled oats, baked beans, wholemeal bread etc).
Mode of payment had a significant effect on participants identified as ‘tightwads’, who were likely to spend 56 per cent more on impulse products when they used a card than when they used cash. The researchers’ conclusion was that cards weaken impulse control, particularly for those people who normally would be very careful with their money.
One useful insight out of this study is that the day a shopping trip takes place has an effect on the purchase of vice products—people shopping on weekends are less likely to be impulsive. This could be a result of the shopping list effect: weekend shopping trips tend to be based on shopping lists, and therefore ‘purchases on such trips are less susceptible to impulsive urges’.
However, a different study, also published in 2011, found a little wrinkle. The authors of ‘Chocolate cake please! Why do consumers indulge more when it feels more expensive?’ found that when consumers were buying food items for immediate consumption, the greater the pain of payment, the more indulgent foods they chose. One study found that if we buy indulgent food for immediate consumption, the pain of using cash is offset by the excitement and anticipation of eating the delicious, exciting food.
In one of the experiments, the researchers created a cafe afternoon snack menu, and found that people who paid with cash consumed close to 80 more calories than those who used a card. Those who used cash consumed products higher in total fat (three grams or 15 per cent more), salt (130mg or 17 per cent), carbohydrates (eight grams or 13 per cent) and sugar (1.5 grams or six per cent), than those who used a card to pay.
So, the moral to all of these stories? Leave your credit cards (and cash) at home, don't shop when threatened, and eat more chocolate cake (well, that's my interpretation).
|
From Wikipedia, the free encyclopedia
In analytic philosophy and linguistics, a concept may be considered vague if its extension is deemed lacking in clarity, if there is uncertainty about which objects belong to the concept or which exhibit characteristics that have this predicate (so-called "border-line cases"), or if the Sorites paradox applies to the concept or predicate.[1]
In everyday speech, vagueness is an inevitable, often even desired effect of language usage. However, in most specialized texts (e.g., legal documents), vagueness is distracting.
The philosophical question of the best theoretical treatment of vagueness, which is closely related to the problem of the paradox of the heap (a.k.a. the sorites paradox), has been the subject of much philosophical debate.
Fuzzy logic
Main article: Fuzzy logic
Supervaluationism
Main article: Supervaluationism
Given a supervaluationist semantics, one can define the predicate 'supertrue' as meaning "true on all precisifications". This predicate will not change the semantics of atomic statements (e.g. 'Frank is bald', where Frank is a borderline case of baldness), but does have consequences for logically complex statements. In particular, the tautologies of sentential logic, such as 'Frank is bald or Frank is not bald', will turn out to be supertrue, since on any precisification of baldness, either 'Frank is bald' or 'Frank is not bald' will be true. Since the presence of borderline cases seems to threaten principles like this one (excluded middle), the fact that supervaluationism can "rescue" them is seen as a virtue.
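The 'supertrue' predicate can be illustrated with a small toy model (the names, hair counts, and threshold-style precisifications below are assumptions for illustration, not part of the article):

```python
# Toy model of supervaluationist semantics. A "precisification" of 'bald'
# is modeled as a sharp cutoff on hair count; a statement is 'supertrue'
# iff it holds under every precisification.

# Candidate sharp cutoffs: the maximum hair count that still counts as bald.
precisifications = [1000, 5000, 20000, 80000]

frank_hairs = 30000  # a borderline case: bald on some cutoffs, not on others

def supertrue(statement):
    """A statement is supertrue iff it is true on every precisification."""
    return all(statement(cutoff) for cutoff in precisifications)

bald = lambda cutoff: frank_hairs <= cutoff
not_bald = lambda cutoff: not bald(cutoff)
excluded_middle = lambda cutoff: bald(cutoff) or not_bald(cutoff)

print(supertrue(bald))             # → False: 'Frank is bald' is not supertrue
print(supertrue(not_bald))         # → False: nor is its negation
print(supertrue(excluded_middle))  # → True: the tautology is "rescued"
```

Neither atomic statement about the borderline case is supertrue, yet the classical tautology is, which is the behavior the paragraph above describes.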
The epistemicist view
Main article: Epistemicism
A third approach, known as the "epistemicist view", has been defended by Timothy Williamson (1994),[1] R. A. Sorensen (1988) [5] and (2001),[6] and Nicholas Rescher (2009).[7] They maintain that vague predicates do, in fact, draw sharp boundaries, but that one cannot know where these boundaries lie. One's confusion about whether some vague word does or does not apply in a borderline case is explained as being due to one's ignorance. For example, on the epistemicist view, there is a fact of the matter, for every person, about whether that person is old or not old. It is just that one may sometimes be ignorant of this fact.
Vagueness as a property of objects
Still, some philosophers are willing to defend vagueness as a kind of metaphysical phenomenon, for instance by proposing alternative deduction rules involving Leibniz's law or other rules for validity. Examples include Peter van Inwagen (1990),[9] Trenton Merricks, and Terence Parsons (2000).[10]
See also
• Keefe, R. and Smith, P., ed. (1997). Vagueness: A Reader. MIT Press.
• Rick Nouwen and Robert van Rooij and Uli Sauerland and Hans-Christian Schmitz, ed. (Jul 2009). International Workshop on Vagueness in Communication (ViC; held as part of ESSLLI). LNAI 6517. Springer. ISBN 978-3-642-18445-1.
1. ^ a b c Williamson, T. 1994. Vagueness London: Routledge.
3. ^ Edgington, D. (1997). Keefe, R. and Smith, P., ed. Vagueness by degrees. MIT Press. pp. 294–316.
4. ^ Fine, K. 2002. The Limits of Abstraction.
5. ^ Sorensen, R.A. 1988. Blindspots. Oxford: Clarendon Press.
7. ^ Rescher, N. 2009. Unknowability. Lexington Books. Uses vagrant predicates to elucidate the problem.
8. ^ Evans, G. (1978). "Can There Be Vague Objects?". Analysis 38: 208–. doi:10.1093/analys/38.4.208.
|
Anxiety Disorders
Frequently Asked Questions
13. Is it difficult to diagnose an anxiety disorder in older adults?
Because anxiety disorders may look different in older adults compared to younger adults, they can be difficult to detect and diagnose. Doctors can have difficulty distinguishing between anxiety caused by adapting to difficult life changes, and a true anxiety disorder. For example, if you fell and broke a hip, you may be justifiably fearful of going out for a while. But that would not mean you have developed an anxiety disorder.
Also, older adults may describe their anxiety to a doctor differently than younger adults. For example, they may express anxiety in physical terms such as feeling dizzy or shaky, while younger adults may express it in more psychological terms.
Older adults may also have more difficulty answering complex screening questionnaires if they have diminished cognitive abilities or memory problems.
Sometimes the physical symptoms of other illnesses can get mixed up with the symptoms of anxiety, making it difficult to determine if a person has a true anxiety disorder. For instance, a person with heart disease sometimes has chest pain, which can also be a symptom of a panic disorder.
Muscle tightness, feeling very tense all the time, and difficulty sleeping can also be symptoms of a physical illness or an anxiety disorder, or both, complicating diagnosis. As a result of these complications, doctors may miss the anxiety disorder.
|
Regression models
Notes on linear regression analysis (pdf file)
Introduction to linear regression analysis
Regression example, part 1: descriptive analysis
Regression example, part 2: fitting a simple model
Regression example, part 3: transformations of variables
What to look for in regression output
What’s a good value for R-squared?
What's the bottom line? How to compare models
Testing the assumptions of linear regression
Additional notes on regression analysis
Spreadsheet with regression formulas (new version including RegressIt output)
Stepwise and all-possible-regressions
What's the bottom line? How to compare models
After fitting a number of different regression or time series forecasting models to a given data set, you have many criteria by which they can be compared:
With so many plots and statistics and considerations to worry about, it's sometimes hard to know which comparisons are most important. What's the real bottom line?
If there is any one statistic that normally takes precedence over the others, it is the root mean squared error (RMSE), which is the square root of the mean squared error. When it is adjusted for the degrees of freedom for error (sample size minus number of model coefficients), it is known as the standard error of the regression or standard error of the estimate in regression analysis or as the estimated white noise standard deviation in ARIMA analysis. This is the statistic whose value is minimized during the parameter estimation process, and it is the statistic that determines the width of the confidence intervals for predictions. A 95% confidence interval for a forecast is approximately equal to the point forecast "plus or minus 2 standard errors"--i.e., plus or minus 2 times the standard error of the regression.
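As a rough sketch (the data are invented for illustration), the RMSE, the standard error of the regression, and the approximate "plus or minus 2 standard errors" forecast interval described above might be computed like this:

```python
import numpy as np

# Illustrative sketch with invented data: fit a simple linear model, then
# compute RMSE and the standard error of the regression, which adjusts for
# degrees of freedom (sample size minus number of model coefficients).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

slope, intercept = np.polyfit(x, y, 1)   # 2 estimated coefficients
residuals = y - (slope * x + intercept)

n, k = len(y), 2
rmse = np.sqrt(np.mean(residuals ** 2))
std_err = np.sqrt(np.sum(residuals ** 2) / (n - k))  # standard error of the regression

# Approximate 95% interval for a forecast at x = 7:
# point forecast plus or minus 2 standard errors.
forecast = slope * 7 + intercept
lo, hi = forecast - 2 * std_err, forecast + 2 * std_err
```

Note that the standard error of the regression is always a bit larger than the raw RMSE, since it divides the sum of squared errors by n − k rather than n.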
However, there are a number of criteria by which to measure the performance of a model in absolute and relative terms:
So... the bottom line is that you should put the most weight on the error measures in the estimation period--most often the RMSE (or standard error of the regression, which is RMSE adjusted for the relative complexity of the model), but sometimes MAE or MAPE--when comparing among models. (If your software is capable of computing them, you may also want to look at Cp, AIC or BIC, which more heavily penalize model complexity.) But you should keep an eye on the validation-period results, residual diagnostic tests, and qualitative considerations such as the intuitive reasonableness and simplicity of your model. The residual diagnostic tests are not the bottom line--you should never choose Model A over Model B merely because model A got more "OK's" on its residual tests. (What would you rather have: smaller errors or more random-looking errors?) A model which fails some of the residual tests or reality checks in only a minor way is probably subject to further improvement, whereas it is the model which flunks such tests in a major way that cannot be trusted.
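If your software does not report a complexity-penalizing criterion, a minimal sketch using the common least-squares form of AIC, n·ln(SSE/n) + 2k, is easy to write (the data and model orders below are invented for illustration):

```python
import numpy as np

# Hedged sketch: comparing a simpler and a more complex model with the
# least-squares form of AIC = n * ln(SSE / n) + 2k, where k is the number
# of estimated coefficients. Lower AIC is better.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([1.2, 2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 7.9])  # roughly linear

def aic(y, fitted, k):
    n = len(y)
    sse = np.sum((y - fitted) ** 2)
    return n * np.log(sse / n) + 2 * k

linear = np.polyval(np.polyfit(x, y, 1), x)   # k = 2 coefficients
quintic = np.polyval(np.polyfit(x, y, 5), x)  # k = 6 coefficients

# The quintic always achieves the smaller SSE, but the 2k term charges it
# for its extra coefficients; compare the two AIC values directly.
print(aic(y, linear, 2), aic(y, quintic, 6))
```

This makes the trade-off in the paragraph above concrete: a more flexible model always fits the estimation period better, so a criterion like AIC offsets that advantage with an explicit complexity penalty.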
The validation-period results are not necessarily the last word either, because of the issue of sample size: if Model A is slightly better in a validation period of size 10 while Model B is much better over an estimation period of size 40, I would study the data closely to try to ascertain whether Model A merely "got lucky" in the validation period.
Finally, remember to K.I.S.S. (keep it simple...) If two models are generally similar in terms of their error statistics and other diagnostics, you should prefer the one that is simpler and/or easier to understand. The simpler model is likely to be closer to the truth, and it will usually be more easily accepted by others. (Return to top of page)
|
WHO launches nutritional website to fight malnutrition
As a way to combat further malnutrition problems, the World Health Organization (WHO) has launched a new, web-based information system to teach prevention methods, IRIN reports.
A lack of proper understanding about what protocol should be used to stop malnutrition from spreading is the main problem, according to WHO. Organization leaders say they hope the website will help world leaders and humanitarian groups come together as a unified team to use the same ideas and methods consistently.
"What we need to do is to make clear what are effective interventions," Francesco Branca, WHO's nutrition director, told the news outlet.
The website, e-Library of Evidence for Nutrition Actions, goes over the three main forms of malnutrition: under-nutrition, vitamin and mineral deficiencies, and obesity.
"Several billion people are affected by one or more types of malnutrition," Ala Alwan, WHO's assistant director-general of Non-Communicable Diseases and Mental Health, told the news source. "Countries need access to the science and evidence-informed guidance to reduce the needless death and suffering associated with malnutrition."
To date, roughly 115 million pre-school-aged children around the world are underweight, while 190 million children are affected by vitamin A deficiency, according to the news outlet. Another 500 million people are obese, according to the new website.
|
Metaphysics engine
From Uncyclopedia, the content-free encyclopedia
Metaphysics Engine, circa 1950, if you believe it really existed.
“Do metaphysics engines really exist, or should they exist? Is it that I believe it should exist, or is it just somewhere stuck between the dirty sheets of time?”
~ Friedrich Nietzsche on Metaphysics Engine
The metaphysics engine is a quaint little device that owes its invention to a man living near the coast of southern Nebraska in the 1950s. This man, who went by the name of Fred Farnhurst, was attempting to create a physical manifestation of his own moral compass. Being a religiously unaffiliated man, i.e. an atheist, he relied solely on the definitions of ‘right’ and ‘wrong’ as he saw fit, whereas the legal definition of ‘right’ was most often wrong and corruptible and therefore correct and infallible. The moral compass, unlike its pocket-sized magnetic namesake, was in fact a colossal machine that sprawled across his entire basement, with all manner of wires, panels, monitors, and inscrutable devices hanging off its unpolished mahogany frame. The array, in theory, worked upon the principle that any action is either right or wrong, and the two qualities are polar opposites that cannot coexist. The machine used a series of detectors to compare the moral aura of a subject to two other calibrating subjects, one which was clearly morally right and another which was clearly morally wrong. The test subjects would be evaluated by comparison.
Challenge
The Twenty Dollar Bill was the polarizing opposite of all things good and holy.
The real challenge, Farnhurst found, was deciding on what calibrating subjects he could use. In the end, being born a Christian, Farnhurst decided to use a copy of the Bible as a control for moral rightness and a stolen twenty dollar bill for the opposing control. On February 3, 1954, Farnhurst threw the switch and activated the machine. After an hour of calibration, he prepared to enter his first test subject – himself.
Setbacks
Unfortunately for Farnhurst, his device was acting far more like a conventional compass than he realized. Just as a magnetic compass requires a magnet that both receives and gives force from the Earth's magnetic field, Farnhurst's moral compass exhibited its own moral polarity which both received and gave push from the theological fabric of the universe. Meanwhile, just as a direction must be specified for a magnetic compass to measure (and thus assign a bearing to), Farnhurst's moral compass required a subject (in this case, Farnhurst himself) to be directed upon. The drawback? The moment of moral inertia exhibited by Farnhurst's moral compass was more than four orders of magnitude greater than that of the universe's theological fabric. This is comparable to having a magnetic compass so large that the Earth spins while the compass stays still.
Results
Thus, the end result was as such: the theological fabric of space-time was aligned to fit the morals of Fred Farnhurst, and Fred Farnhurst's values became the objective truth of all philosophical and moral debate in the universe. Farnhurst consequently became the center of all moral mass in the universe and simultaneously morally perfect. Unable to bear the weight of all morality, Farnhurst promptly shot himself; having failed because he was so moral, he hired a morally unscrupulous person to kill him instead. The universe, no longer influenced by the moral polarity of Farnhurst's once-living mind, returned to its previous alignment in the multiverse over the course of the next few days.
Rebirth
So it remained until, decades later, a man named Linus Torvalds, founder of the Linux system, discovered the Nebraskan coastal home and the invention lying underneath it. Realizing that this was in fact a mere prototype, Torvalds quickly claimed it and set up camp inside the perimeter of Farnhurst's home. With the recent advent of computers, Torvalds was able to move the mechanical nature of many components of the moral compass into the realm of software; using a copy of the hypertext NKJV and a digitized image of a dollar bill, Torvalds created an open source Linux application under the GNU public license and began distributing the first beta binaries and source in spring of that year. The program, to avoid the confusion of multiple moral polarities within the same universe, was a network-based entity designed to contact a unified server and send in requests to be queued when attempting to change the polarity of the theological fabric of the universe. The main server would change the experimental focus (previously Farnhurst himself) as needed, thus altering the metaphysical nature of existence.
Revamping
Some years down the line, the program underwent several additions and improvements; it now enjoys a user-friendly GUI environment with glitch-free job processing. With the world wide web now a global entity, a single server can be used to change the metaphysical aspects of the world to perfectly harmonize with every Linux user with a 56k connection or better. More importantly, the metaphysics tower, or MePhyT, as the server is called, can exert force to change the moral nature of any given individual (or indeed all moral nature surrounding that individual) in a manner such that any perished or deceased person will be immediately rejected from all afterlives at once, leaving the person effectively trapped in the world of the living indefinitely. Such practices allow Linux users to enjoy perfect immortality; with no threat to the amount of time spent on this Earth, Linux developers are now free to enhance Linux to their heart's content, just so long as they remain under the GNU public license.
Epilogue
Farnhurst has yet to be revived using these methods but is instead enjoying a guaranteed afterlife of eternal bliss. Though Farnhurst never actually created a successful moral compass, MePhyT has made such an invention obsolete. According to Torvalds, Linux's next endeavors, having conquered death and morals, are to gain control of stock market prices, redirect the flow of time, remove the gravitic nature of matter, and to redefine the speed of light.
|
The Autobiography of Alice B. Toklas Chapter Abstracts for Teachers
Buy The Autobiography of Alice B. Toklas Lesson Plans
Chapter Abstracts
Chapter 1
* Alice B. Toklas is born in San Francisco.
* As a young girl, Alice studies music.
* Alice's mother dies.
* In Alice's teen years, literature becomes her passion.
* Alice meets Gertrude Stein's brother and sister-in-law.
Chapter 2
* Alice arrives in Paris in 1907.
* Alice meets Helene, Stein's housekeeper.
* Helene expresses dislike of Henri Matisse.
* Helene leaves the Stein home to spend more time with her family.
* Helene returns to the Stein home to find that Gertrude and Alice's friends have all become famous.
* Alice is introduced to famous artists, Matisse, Picasso, and Cézanne.
* Alice learns about art and culture in Paris.
* Alice receives French lessons from Fernande Picasso.
Chapter 3
* Gertrude Stein finishes school at Johns Hopkins.
* Mr. Stein discovers Cézanne while in Florence.
* The Steins visit Vollard's gallery for the first time.
* Gertrude Stein writes the poem "Vollard and Cézanne."
* The Steins begin to purchase works by C...
|
Chegg Guided Solutions for Heat and Mass Transfer Fundamentals and Applications 4th Edition Chapter 1 Problem 20P
A 15-cm-diameter aluminum ball is to be heated from 80°C to an average temperature of 200°C. Taking the average density and specific heat of aluminum in this temperature range to be ρ= 2700 kg/m3 and cp= 0.90 kJ/kg·K, respectively, determine the amount of energy that needs to be transferred to the aluminum ball. Answer: 515 kJ
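The stated answer can be checked with the lumped energy balance Q = m·cp·ΔT, where m = ρV and V = (4/3)πr³ for a sphere:

```python
import math

# Sanity check of the stated answer: energy needed to heat a solid sphere,
# Q = m * cp * dT, with m = rho * V and V = (4/3) * pi * r^3.
rho = 2700.0        # kg/m^3, average density of aluminum
cp = 0.90           # kJ/(kg*K), average specific heat
d = 0.15            # m (15 cm diameter)
dT = 200.0 - 80.0   # K, temperature rise

volume = (4.0 / 3.0) * math.pi * (d / 2.0) ** 3   # m^3
mass = rho * volume                               # kg
energy = mass * cp * dT                           # kJ

print(round(energy))  # → 515 (kJ), matching the stated answer
```

Since cp is in kJ/(kg·K), the result comes out directly in kJ with no further unit conversion.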
|
Peeling, or delamination, is a loss of adhesion between a paint film and the substrate (the material being painted), causing sections of paint to separate from the surface in sheets. Though all paint problems are aggravating, there's nothing worse than leaving the driveway with a shiny new paint job and arriving at the donut shop in a bare metal car. Peeling is most commonly caused by poor surface preparation, usually insufficient sanding or cleaning. But there are other causes, too, like omitting or using the wrong primer for your substrate, exceeding the paint product's recommended recoat time, or, in the case of clearcoat colors, spraying the colorcoat too dry, using an incompatible clearcoat, or incorrect colorcoat reduction. You can prevent peeling by, again, reading the instructions for the products you're using, properly cleaning and sanding your substrate, using the correct undercoats (primers) for your substrate, and making sure you topcoat within the recommended flash times for the material you're using.
Grit, sometimes referred to as seediness, is the dispersion of solid particles of different sizes embedded in the paint surface. This usually happens when your paint material isn't properly or completely stirred, or more commonly when you don't strain your paint or primer. You may also run into a grit problem when using old paint (like that can of color you've had stashed waiting for the "right" truck to use it on) or by using material past its pot life (the amount of time before the catalyst really begins to kick in and starts to harden the material). Repair options in the case of grit are the same as those for runs and sags: you can wash the area with a solvent-wetted (thinner or reducer) rag and then clean and respray the area (seldom a first choice), or you can keep on going and wait till the paint fully cures and then sand and buff or sand and respray. Grit prevention can be achieved by mixing your materials thoroughly, straining all your under- and topcoats, and mixing up only enough material to use within its specified pot life.
If you don't get runs, you're not trying hard enough. Runs, also sometimes known as sags, hangers, or curtains, are (along with dust) one of the most prolific paint problems for the hobbyist or occasional painter. The most common causes are holding the gun too close to the surface, moving too slowly across a panel, and double coating an area. Over-thinning/reducing is also a possible cause, along with trying to paint in an environment that is too cold. To fight runs and sags, you've gotta hold your gun perpendicular to the surface and keep it a steady and correct distance from the panel, all the while moving it fast enough that you don't pile the paint on, yet slow enough to get good coverage and flow, a process that comes with practice and experience. If you do "hang some curtains," in some cases you can wipe the area with a solvent-wetted rag and then clean and respray the area (seldom a first choice), or you can keep on going and wait till the paint fully cures and then sand and buff or sand and respray.
Sand Scratches
Sand scratches show up as lines or marks in the paint film that mirror the marks in the surface being painted. They may also show up as streaks in the topcoat that magnify marks in the undercoat or substrate layer. These are caused by improper or incomplete final sanding of bodywork or primer coats (using too coarse a paper), trying to cover scratches by filling 'em with primer, or in some cases by sanding single-stage or basecoat finishes before clearing them. You can fix 'em by letting the finish cure and then carefully re-sanding the area with an ultra-fine paper and refinishing it. You can avoid sand scratches by graduating your sanding from coarse to fine papers, and by not sanding basecoat colors before clearcoat (though if you do have to sand the basecoat for some reason, make sure you apply additional basecoat color before clearing). Also use 1,200-grit or finer paper for color sanding.
|
A Rarity Among Roman Towns
By MICHAEL FRANK; MICHAEL FRANK, who lived for two years in Italy, divides his time between New York and Los Angeles.
Published: December 23, 1990
IT is a place that whispers history in the way ruins must have a century ago, before armies of tourists arrived in stone-splitting buses and archeologists had to guard their treasures with spiked fences and wire ropes. Four hours southeast of Rome, three hours northeast of Naples, Saepinum is tucked into a corner of the Molise, one of the least populated and least known districts in all of Italy. Even for the Molise, Saepinum is a rarity: an ancient Roman town that is preserved rather than embalmed, seldom but easily visited, alive despite the tenets of museology. Although guides cluck with disapproval, a handful of farmers, whose forebears have worked this land since the Iron Age, continue to tend fields bordered by ancient walls and to inhabit houses that incorporate ancient stones, at times even older than Roman. Saepinum is full of such pairings: ancient and contemporary, urban and rustic, it is among the country's most evocative, unspoiled sites.
In Saepinum, chickens feed alongside chariot-pitted Roman roads. Apples ripen and drop into the remains of Roman houses, stocking the larders of a vanished civilization. Grasses regularly sprout in the forum, a tenacious, earthy reminder of Saepinum's origins as a key crossroads amid fertile pasture land. Here, it seems, man has always lived in close rapport with nature: Saepinum was no Rome, muscling the landscape, everywhere flexing its might. It was a provincial city, modest in both scale and scope. Never grand, always accessible, Saepinum is the kind of place where it's easy to add a few imaginary bricks to broken walls, fill the air with clapping hoofs and creaking carts, and fool yourself into believing -- for a moment anyway -- that you can catapult backward to the city's heyday.
Saepinum is situated at the intersection of two roads of age-old significance: the tratturo Pescasseroli-Candela (a tratturo is a cowpath or sheepway) that links the Abruzzo and Apulia regions, and the road that crosses the plain of the river Tammaro and climbs up into the nearby Matese hills. They are just a tangle of place names until you realize that for millennia the shepherds transformed these dry roads into rivers of undulating wool as they moved their flocks across these pastures and that their livelihood depended on meeting at this junction to trade with local farmers. Here, through commerce and conversation, a community set down its first fragile roots.
A small settlement is thought to have existed on the site before recorded history. At the beginning of the Iron Age (circa 1,000 B.C.) the Pentri, a tribe of Samnites, the indigenous people of the region, founded a village at Saepinum called Saipins in Oscan, the language spoken locally until the second century B.C. (Official and educated classes continued to speak Oscan until the Social War of 88 B.C.: the language was still spoken at Pompeii at the time of its destruction in A.D. 79). Saipins, in Oscan, is linked to the Latin saepire -- to hedge in, to enclose -- and indeed this sense of enclosure is palpable in the ruins as they stand today, with their 275 yards of Roman walls that protect the city against attack and hug the valuable crossroads for which they serve as a kind of immense, ferocious toll booth.
In the fourth century B.C., threatened by the expansion of Rome, the Samnites withdrew to the nearby hills to fortify themselves against the new powers. Famous for their cyclopean walls throughout the Molise, they barricaded themselves at Terravecchia, one and a half miles southwest of Saepinum. But in 293 B.C., during the Third Samnite War, the fierce Samnites, whom Livy called the strongest and the mightiest, fell to the Roman consul L. Cursor Papirius after a battle that left 7,400 dead and 3,000 taken prisoner. The few survivors crept down into the plains and once again settled around those essential crossroads. The territory was annexed by Rome during the first century B.C. The first stones of the walls were laid at the end of the Social War (88 B.C.), but most of Saepinum was built during the reign of Augustus (27 B.C. to A.D. 14).
The best way to orient yourself at Saepinum is to have a sense of the crossroads and the walls with their four heroic doors. The old tratturo became the Roman decumanus (or east-west road, though at Saepinum it's quite a bit skewed) and is demarcated by the Porta di Bojano to the northwest and the Porta di Benevento to the southeast. The road that led down from the Matese into the plains became the cardus (or north-south road) and is bound by the Porta di Terravecchia to the southwest and the Porta del Tammaro (northeast). Arriving from Campobasso, you park outside the Porta del Tammaro and enter Saepinum through its crumbling remains.
Almost at once you should have a sense of the modest proportions of the city, which covers an area of 29.6 acres within the walls, and can be traversed, along the cardus or the decumanus, in fewer than 10 minutes in either direction. Saepinum is small but not parochial.
|
Future Rock: Symmetrical Scale Silliness
Chops: Intermediate
Theory: Advanced
Lesson Overview:
• Learn how to use diminished, augmented, and whole-tone scales.
• Create angular lines that emphasize altered sounds.
• Break down complicated scales into easily understood ideas.
Have you ever craved a little extra musical spice? Do you want your licks to have a sense of mystery, urgency, darkness, or zaniness that major scales and their corresponding modes can’t provide? If so, you may want to investigate symmetrical scales. The concept behind a symmetrical scale is pretty simple: The intervals within the scale follow a consistent and predictable pattern. For example, moving up in whole-steps until you reach the root creates a whole-tone scale. We’ll also cover diminished and augmented scales in this lesson, so let’s dig in!
Ex. 1 is based on a whole-tone scale. You may remember that we used this scale for a warm-up exercise a few months ago [“Future Rock: Hybrid Picking”]. As I mentioned, creating a whole-tone scale couldn’t be easier: just keep stacking up whole-steps until you get back to your root. You can hear this sound in the impressionistic work of Maurice Ravel and Claude Debussy. A whole-tone scale creates an unfinished, dreamy quality since it doesn’t really resolve anywhere (that’s because any note in the scale could be considered the root). Everyone from hardcore jazzers to shredders like Bumblefoot, Buckethead, and Shawn Lane has twisted and turned this scale far past what any of the classical composers ever envisioned.
Because of the whole-tone scale’s symmetrical construction, there are only two distinct versions, and they lie a half-step apart. Whole-tone scales sound great over dominant 7#5 chords, but they also create an interesting sound over minor 7 chords (just make sure to use the whole-tone scale that includes the b3 of the chord). In the example below, we have an E whole-tone scale (E-F#-G#-Bb-C-D) over an E pedal tone, and this creates an E altered sound.
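For readers who like to see the symmetry spelled out, the construction can be sketched in a few lines of Python. The pitch-class numbering and flat-based note spellings are my own illustration, not part of the lesson's notation:

```python
# Pitch classes: 0 = C, 1 = Db, ... 11 = B. Spellings here use flats,
# so the lesson's F# and G# appear as Gb and Ab.
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def whole_tone(root):
    """Stack whole-steps (two half-steps each) until the root comes back."""
    return [(root + 2 * i) % 12 for i in range(6)]

# E whole-tone scale (E = pitch class 4); the lesson spells it E-F#-G#-Bb-C-D.
print([NOTE_NAMES[n] for n in whole_tone(4)])  # ['E', 'Gb', 'Ab', 'Bb', 'C', 'D']

# Symmetry: transposing by a whole-step lands on the same collection,
# so only two distinct whole-tone collections exist, a half-step apart.
print(len({frozenset(whole_tone(r)) for r in range(12)}))  # 2
```

The second print makes the two-versions claim concrete: all twelve possible roots collapse into just two distinct sets of notes.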
We’re sticking with the E whole-tone scale in Ex. 2, but this one is inspired by Guns N’ Roses guitarist Bumblefoot, who has a wealth of creative whole-tone ideas. You can hear some interesting tension over the funky E7 rhythm. Because of the symmetrical nature of this scale, we can use both of these examples over Gb7, Ab7, Bb7, C7, and D7.
We dig into diminished sounds in Ex. 3. Here, we have a half-whole diminished scale that, as the name suggests, is created by alternating half-steps and whole-steps. Again, because of the scale’s symmetrical nature, there are only three versions. When you look at the intervals the scale contains relative to the root—in this case, G—you get b9, #9, 3, #11, 5, 6, and b7. That combination of tones works great over 7#9, 7b9, 7b5, and even basic dominant 7 chords. Make sure you are hearing the scale and thinking about the function of each note over the chord. This will help free you up to improvise with the half-whole diminished scale.
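The same kind of sketch (again my own illustration, not part of the lesson) confirms both the interval list above and the three-version claim for the half-whole diminished scale:

```python
# Build the half-whole diminished scale by alternating half- and whole-steps
# up from the root (pitch classes: 0 = C ... 11 = B).
def half_whole(root):
    notes, current = [], root % 12
    for step in (1, 2, 1, 2, 1, 2, 1):  # eight notes per octave
        notes.append(current)
        current = (current + step) % 12
    notes.append(current)
    return notes

# Interval names relative to the root, as the lesson lists them for G:
INTERVALS = {0: "R", 1: "b9", 3: "#9", 4: "3", 6: "#11", 7: "5", 9: "6", 10: "b7"}

g_scale = half_whole(7)  # G = pitch class 7
print([INTERVALS[(n - 7) % 12] for n in g_scale])
# ['R', 'b9', '#9', '3', '#11', '5', '6', 'b7']

# Symmetry: transposing by a minor third maps the scale onto itself,
# so only three distinct collections exist.
print(len({frozenset(half_whole(r)) for r in range(12)}))  # 3
```

Note that the interval readout matches the lesson exactly: b9, #9, 3, #11, 5, 6, and b7 over the root G.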
|
Lectures Didn't Work in 1350—and They Still Don't Work Today
A conversation with David Thornburg about designing a better classroom
Students at Wooranna Park Primary School in Melbourne, Australia, go on an outer-space mission from their holodeck-style classroom.
“Of all the places I remember from my childhood,” David Thornburg writes, “school was the most depressing.” Now an award-winning educational futurist and creator of the “educational holodeck,” Thornburg was prompted by his early classroom experience to help others rethink traditional classroom design. In his latest book, From the Campfire to the Holodeck: Creating Engaging and Powerful 21st Century Learning Environments, Thornburg outlines four learning models: the traditional “campfire,” or lecture-based design; the “watering hole,” or social learning; the “cave,” a place to quietly reflect; and “life”—where ideas are tested.
I spoke with Thornburg about his project-based approach to learning, why traditional models of teaching fail, and how to incorporate technology into education to teach students how to think creatively. Here's a transcript of our conversation, condensed and edited for clarity.
What was it about school that was so depressing for you?
I was in schools that didn’t support the way I learned. When I was a student in elementary school, our desks were bolted to the floor, and the desks were set up for right-handed people. I was left-handed. There was no way to accommodate me, so my teacher thought it would be a good idea to tie my left hand down with a belt, forcing me to use my right hand. I overcame the barriers, but I wanted to make sure I never did to other students what had been done to me. It’s kind of a bummer to come at it from that perspective, but that’s the reality.
How did you come up with the four types of learning?
The real breakthrough was attending a conference at the National Academy of Sciences. Every presenter at the conference was an absolutely breathtaking speaker. The whole focus was on the role of technology and learning. But a couple days in, I showed up and noticed that halfway through the event, a lot of people were getting up for breaks. There were no breaks scheduled during the day. The interesting thing was that they stayed out in the hall talking to peers about what they’d just seen in the lecture. Here we had great speakers presenting, but people were in the hall talking. It was meeting a need. That night, I reflected on the day and came up with the idea of different learning spaces. I thought that maybe we could think about technologies to support these different ways of learning.
So what is an educational holodeck, anyway?
The science-fiction holodeck that came with Star Trek: The Next Generation was just an empty room that could become a whole simulation of anything. A Victorian drawing room. An ocean-going vessel. Anything you wanted, it could become. That included furniture and everything, controlled by a computer. We don’t know how to fabricate holographic furniture that people can sit on, so we need real furniture, but we’ve taken a good-sized room and covered the surfaces, no external light coming in, and in the front of the room put a large projection screen. Our first was 10 meters across and 1.5 meters high, which is big. On the side of the room, there was an interactive whiteboard and around the periphery, personal computers. Kids come into the room to go on a mission.
One that we did was a mission to Mars, to let kids explore whether Mars has, or had, life. There are challenges when you’re taking off in a spaceship, and they have to solve problems. It’s very interesting, because it’s an immensely interactive environment, and after a little while they almost feel like they’re there. When the students enter the room, it’s already up and going. It’s only after they’re out of the room that I turn everything off and it goes back to the regular room. And I feel a difference. It’s like, “Whoa, where’d my spaceship go?” I get a funny feeling in my stomach.
How did you judge the success of the holodeck?
We brought back a bunch of students one year later to revisit the holodeck and asked these kids to talk about what they knew about Mars. What they knew then was much more than what they knew at the end of the mission. They were so interested in it that they continued to study the topic on their own. I don’t know about you, but if I’m asked to answer some questions from a year ago, I may have forgotten some stuff. The idea that they had grown is really exciting.
There’s been a lot of emphasis on testing recently. How do your ideas fit with these requirements? Is there room for exploration?
The emphasis on testing is changing. The new standards, especially in science and math, are radically different from what we had in the past. Basically, the function for the new math standards is to help children learn the way a mathematician thinks. The computational skills are just a byproduct. Most of the math instruction in American schools has been focused on computation, not on real mathematics. That’s changing. People are still anxious about the new assessments, but they’ll find a way to do that. The Next Generation Science Standards have been adopted by twenty-some states, and they pretty much mandate new types of assessments. For the first time in history, engineering is now a K-12 topic. I’m not even sure it’s content the teachers know, and in a way, that’s almost a blessing. It forces kids to go to projects on their own rather than you giving a lecture. It’s the idea of co-learning.
What do you think about the Common Core Standards?
In the Common Core math standards, I find a lot to like. The problem I’ve got with the standards—there are only eight—is that the illustrations use traditional topics. There’s nothing wrong with that, but someone who’s just skimming it might think they don’t have to change what they teach. Technically it’s true. You don’t have to change what you teach, but you have to change how you teach it.
You point out that we’ve been using the lecture-based model since the 1300s. Why have we kept replicating a model that doesn’t suit everyone’s needs?
It’s a fascinating question. There’s a painting of a classroom by Laurentius de Voltolina from 1350 that shows it’s not working. Students are talking to each other or falling asleep while the teacher drones on. Why has this perpetuated? I don’t know. In our workshops we tell people to go to Second Life and check out a classroom—and it’s exactly like a classroom in the real world. It’s strange, because this is a place where you can move by teleporting; you can do whatever you want. So using space in the same way is strange.
Is it possible that the failure of students in lecture-based classrooms is due, in part, to a decrease in attention span of kids who’ve grown up in front of screens?
|
Eagle Nest
The American Bald Eagle
The eagle had represented power and majesty for thousands of years before John Adams, Thomas Jefferson, and our Nation's other founding fathers chose the bald eagle as our national symbol in 1782. Then as now, the eagle represented 'a free spirit, high soaring and courageous.' Moreover, the bald eagle has always been uniquely American, found only on our continent. At the time the Republic was formed, the bald eagle was common in North America, soaring through the skies over its timber-bordered lakes, streams, and coastlines. Perhaps as many as 25,000 lived in what we now call the lower 48 states.
Today, except for the Southwestern population, the bald eagle has been removed from the threatened species list in the United States. Its numbers dropped dangerously low over the last century, for a number of reasons. However, the outlook is getting better. Because of the efforts of many people, the decline in the bald eagle population had apparently been halted, and perhaps even reversed for a few years, but the population is now declining once more.
Conservationists view recent improvements with cautious optimism. But much work remains to be done - increased preservation of crucial natural habitat, greater public awareness of the eagle's problems, plus more public and private funding for conservation and research.
A major part of this effort lies in informing Americans about our National Symbol. A greater understanding and appreciation of this majestic, threatened bird will help it to survive and flourish as a valued part of our heritage.
Symbol Adopted
On June 20, 1782, the bald eagle was formally adopted as the emblem of the United States, a living symbol of our nation's strength and freedom. Today, the bald eagle represents more than a nation. The eagle typifies the plight of all wildlife struggling to survive in a world dominated by the needs of human beings. The decline of the bald eagle reflects the reduced quality of our natural environment. If this powerful symbol of freedom continues to be abused and neglected as it has been for decades, what hope does the future hold for less recognized species?
Concern for the fate of all endangered species is growing, as is concern for the environment. Clear thought and keen awareness of these issues are essential if we are to preserve unique and important ecosystems.
What is the Bald Eagle?
The bald eagle is a bird of prey - that is, a flesh-eating bird. Its Latin name - Haliaeetus leucocephalus - means 'white-headed sea eagle.' It is called 'bald' because the word was used in times past to mean white or white-faced.
Field Marks:
Adult: dark brown body, white head and tail; yellow feet, beak, and eyes. Immature: normally dark brown body, showing white in the wing linings and breast. It has a brown head, tail, beak, and eyes, and is similar in appearance to a Golden Eagle without white on the tail. Golden Eagles have legs feathered to the toes; bald eagles have bare legs.
Some immatures may have a white breast, a brown breast or a mottled breast.
A 4-year-old, or subadult, will be similar to an osprey, with a whitish head with a line through the eye and a whitish tail with a black line along the end of the tail.
Found only in North America, the bald eagle is also called the American Eagle. The other eagle native to this continent, the somewhat less rare Golden Eagle, occurs in other parts of the world as well. The bald eagle is the North American continent's second largest bird of prey, surpassed in size only by the California Condor.
As is true with most birds of prey, the female is almost always larger than the male. A female bald eagle may stand as much as 107 centimeters (42 inches) high, with a wingspan of up to 240 centimeters (8 feet). Males stand up to 90 centimeters (35 inches) high, with wingspans of nearly 200 centimeters (6 1/2 feet). Body weights of bald eagles range from 3.6 to 6.4 kilograms (8 to 14 pounds), with females generally about a kilogram heavier than males.
How Do I know A Bald Eagle When I See One?
In addition to their large size, adult bald eagles are identified by their snowy white head and tail feathers. However, a young bald eagle does not get these white feathers until it reaches sexual maturity in its fourth or fifth year of life. Immature bald eagles are a mottled light and dark brown all over, and in flight they often are mistaken for golden eagles or turkey vultures.
Osprey are often mistaken for bald eagles, but with a little close observation they can be easily identified. The osprey has a smaller head that is white, with a black line going through the eye. Some 4-year-old eagles will show this same black line, so the wing shape is the best identification. The osprey has a narrower wing than the bald eagle and normally has a crook in it, while the bald eagle's wing will be straight.
What do Eagles Eat?
Sixty to ninety percent of a bald eagle's diet consists of fish. The birds generally scavenge dead fish, although they will catch live fish as well. They will take an occasional heron, crow, grouse, duck, gull, or small mammal, especially if fish are not available. They will feed on dead animals if other food is not found. This includes carrion such as road-killed raccoons or deer, as well as chickens and small pigs, which farmers may throw out with their manure during the winter. During deer season, bald eagles may come down to feed on the innards of deer that hunters leave behind.
The bald eagle is an opportunist and will sometimes steal fish from an osprey or crow. But ospreys have been observed stealing fish from young eagles as well. The bald eagle uses several fishing techniques. A favorite method is to perch in a tree and watch for a fish swimming in open water nearby, and then swoop down to capture it. If a suitable tree is not available near the water for perching, the birds may also fly out over open water looking for fish below. In winter, they may perch on the edge of ice near open water and wait for fish to float by, or to wash up on the ice.
After catching a fish, the eagle will either fly back to a perching tree to eat it or, if the fish is small enough, swallow the fish whole while in flight. Occasionally, eagles will carry a larger fish they have caught back to the ice or to the shore to be eaten. In over 80% of their feeding, wintering bald eagles along the Mississippi River feed upon small fish they can eat while flying.
How Do Bald Eagles Nest?
Photo by Les Zigurski
from [email protected]
It has been stated for many years that a bald eagle pair will mate for life, but if one partner dies or disappears, the other will, if lucky, find another mate. A newly bonded pair may work several years on a nest before actually breeding. They may desert one nest site and start again somewhere else, usually within half a mile. A northern bald eagle will begin to nest sometime in February or April; a southern bald eagle will begin to nest in December and January. Each nesting pair will spend a great amount of time preparing the nest before any egg is laid. The successful nest is generally located in a large tree within one mile of water, either a lake or a river, where adequate food is available.
Bald eagle nests are generally found from 15 to 36 meters (50 to 120 feet) above the ground, in a tall, sturdy tree. It takes at least two weeks for a pair of eagles to build their nest.
A pair of eagles, once established, may use the same nest several times over a period of years. Each year more materials are added to the nest, which increases the size of the nest each year that it is used. Nests weighing up to 2 tons have been found.
A Vermilion, Ohio nest measured 2.6 meters across its top, was 3.6 meters high, and weighed nearly 1,000 kilograms! Sometimes, eagles will build more than one nest and use them alternately.
An eagle nest is constructed from large sticks, which are laid together to form the outside part of the nest. The center of the nest is filled with dead weeds, stubble and other softer materials, which may be available in the area. The nest cavity where the eggs are laid is lined with grass, dry moss and feathers. Many authorities believe that some bald eagles show such a strong attraction to their nesting site that, if displaced or overly disturbed, a pair may not return to the nest the following year. This fact places great importance on protecting nesting areas from disturbances such as logging operations, land development and recreational activities.
Human activity in the area of a nest during the breeding season must be strictly controlled to avoid disturbance to the nesting bald eagles. The United States Forest Service has developed, and is presently enforcing, human activity controls in bald eagle nesting areas in the National Forests of Minnesota, Michigan, Wisconsin, and Arizona. This may be a great part of the reason the bald eagle population has recovered so well in the past 40 years.
The female eagle will normally lay two eggs, and occasionally three, which are about the size of a goose egg (74.4 mm x 57.1 mm) and colored dull white or pale bluish white. Both parents incubate or brood the eggs, which take 34 to 35 days to hatch, and care for the young eaglets. These eaglets remain in the nest for another 12 to 13 weeks before taking their first flight.
After learning to fly and feed themselves, the young immature eagles are allowed to return to the nest for the remainder of the summer. But most young eagles are not observed near their parents' nest after the first year.
Where in the United States are Bald Eagles Found?
The search for food forces bald eagles, which nest in the northern United States and Canada, to migrate south in late autumn and early winter, when lakes and rivers in their nesting grounds freeze over. Congregations of these birds may be seen during the winter along lakes and rivers where there is open water, often near dams and power plants.
Primarily fish-eating birds, they are found along the coasts of North America and along inland lakes and rivers from the Gulf of Mexico to the Arctic. The birds will winter as far north as ice-free water permits. At some time during the year, a bald eagle may be seen in nearly every state in the continental U.S. (there are none in Hawaii).
A majority of Bald Eagles nest in Alaska and remote areas of Canada. A small number nest in the United States in areas where isolation can still be maintained. Major nesting areas are concentrated in: the Far West, (Alaska, the San Juan Islands off the coast of Washington, as well as Washington, Oregon, and northern California); the Upper Midwest (central and northern Minnesota, northern Wisconsin and upper Michigan); and the East Coast (Maine, the Chesapeake Bay area, and Florida).
During winter months, bald eagles are widely scattered throughout much of the continental U.S. Substantial numbers may be found along the Upper Mississippi River and its larger tributaries. Smaller concentrations may be found in other areas, including the deserts of the West and Southwest.
How Many Bald Eagles Are There In North America?
This question is almost impossible to answer. Even if an exact number could be obtained, it would change from one year to the next. Also, because of the migratory habits of bald eagles, estimates of the populations are extremely difficult to make and may be misleading.
Based on several years of surveys in the United States and Canada, the total bald eagle population of North America is estimated to be somewhere between 35,000 and 50,000.
That Seems Like Many Bald Eagles!
Why Should We Be So Worried About Their Fate?
There are relatively large eagle populations in Alaska and Canada. Alaska, for example, is thought to have approximately 10,000 nesting pairs; Saskatchewan alone is estimated to have between 3,500 and 4,000 pairs, and over 3,000 pairs are believed to exist elsewhere in Canada. Prime nesting areas in the United States are unevenly distributed among Florida, Wisconsin, Minnesota, Michigan, Oregon, and Washington. There are close to 400 active nests in National Forests alone, and perhaps as many on state and private lands.
Twenty-five years ago most states had no nesting bald eagles at all. However, in the last few years many states have had from one to thirty pairs of bald eagles successfully raise young. It is this low number of bald eagles in the lower 48 states, where large numbers once existed, that prompts concern for the fate of the species.
Now we are seeing an eight-year decline in the percentage of young that survive to winter. The reason for this decline is unknown, as is the exact reason for the decline in the 1950s. The Fish & Wildlife Service states that the eagles declined in the 1950s because of DDT; the problem with that statement is that the bald eagle was recovering for many years before the nation banned DDT.
How Close To Extinction Is The Bald Eagle?
The answer could be (and often has been!) debated for hours. Some recent events give encouragement to the belief that the species will never become extinct. Bald eagle eggs from Wisconsin and Minnesota have been transplanted successfully into nests in Maine, where the population suffered a decline in reproductive success because of pesticide contamination.
Young eagles have been successfully transplanted into nests in Maine, New York, Kentucky, Tennessee, Georgia, Indiana and Missouri. Illinois and Iowa now have more bald eagles nesting within their borders than they have had for more than 70 years. So there is hope for the eagles' future.
At the same time, there is a constant struggle to protect essential vital habitat against human disturbance and destruction. The threat of toxic chemicals in their food supply continues.
Wait A Minute!
DDT And Dieldrin Are Banned In The U.S. So They are No Longer A Threat To The Bald Eagle, Right?
Wrong! Although DDT and dieldrin are no longer used in the U.S. without government consent, their manufacture continues. United States corporations export large quantities of DDT and other pesticides to foreign countries where their use is legal and widespread. Some U.S. communities are once again trying to get permission to use DDT to kill mosquitoes for fear of West Nile Virus. These chemicals are constantly finding their way back to North America through the food chain. As long as these chemicals, whose residues persist in the environment for many years, are being used in other parts of the world, they continue to pose a global threat to wildlife.
Do Other Chemicals Pose A Threat to Bald Eagles?
During the 1970s, PCBs (polychlorinated biphenyls) received as much attention as DDT received in the 1960s. PCBs are used primarily as an insulating and cooling fluid in electrical transformers. They are known to cause cancer. They are widespread in the environment and persist without breaking down for many years. Like pesticide residues, PCBs accumulate in larger amounts at higher levels in food chains. There is evidence that PCBs interfere with reproduction in wildlife. Heavy metals such as lead and mercury are widespread in the environment. Lead used to be found in a wide variety of products, from gasoline to shotgun shells. Mercury is used in many industrial processes, such as paper and chemical manufacture.
When they enter the bald eagle's food chain, these metals pose a threat. In Minnesota, bald eagles feeding on Canada Geese killed or crippled with lead shot were found to have elevated levels of lead in their blood.
Lead is known to reduce the blood's ability to transport oxygen, which will limit an eagle's ability to fly very far. In Maine, where there are many paper mills, infertile bald eagle eggs have been found to have a higher mercury content than eggs from other areas.
The nation's large chemical companies like Monsanto and Bayer are constantly developing new and stronger chemicals that are used by farmers and homeowners. There is a real threat that one or more of these chemicals are working their way into the food chain like DDT did.
What Are The Penalties For Killing A Bald Eagle?
A federal law protecting both the bald and golden eagle specifies fines of up to $10,000 and/or a maximum of one year in jail for the intentional killing of one of these magnificent birds. The penalty can be double that for a second killing.
How Much Disturbance Will A Bald Eagle Tolerate?
Bald eagles fear humans at all times, but will tolerate much less disturbance during the nesting season than at other times of the year. A nesting pair will seek isolation, and any human interference, if prolonged, may drive the birds away from the nest.
During the winter, eagles will roost and feed in groups close to human habitation and activity. However, prolonged and repeated disturbances will send the birds on their way in search of another isolated roost or feeding area. Disturbances at these sites can lessen the eagles' survival chances, because secondary roosts, if available, will in all probability not have the vital weather protection that the primary roost provided.
What Kinds Of Land-Use Practices Have Adverse Effects On Bald Eagle Habitat?
Eagles, being large birds, need large strong trees for nesting, roosting, and perching while hunting. Most trees have to be over 200 years old before they can be used as nesting sites for the bald eagle. Logging operations have disturbed or destroyed many nesting territories and potential nest sites, as well as winter roosts. U.S. Forest Service regulations protect nesting territories in the National Forests. But nests and roosts on private land may not be protected, and many times nesting trees used by eagles may be cut down, before their existence is known to the scientific community or general public.
Intensive recreational use of land near nests and roosts disturbs the birds. Increased traffic from snowmobiles and all-terrain vehicles presents a serious problem, which must be addressed. As the human population expands and moves in greater numbers back into the countryside, the bald eagle is pushed back into smaller and smaller pockets of suitable habitat. Forests are cleared for farming. Vacation homes are built on the shores of lakes where bald eagles nest. In Illinois, the Central Illinois Expressway was constructed right up a valley that eagles used for a winter nighttime roost. In Maine and Washington, supertanker ports and oil refineries have been built near bald eagle roosting and wintering areas. All too often the eagle is forced, by people, to give in and move elsewhere.
Most Bald Eagles Nest In The Northern United States And Canada - Where There Is Plenty Of Suitable Habitat, So Why Is It Necessary To Protect Nesting Sites?
The critical point to remember is that bald eagles are very territorial birds, and most breeding pairs return to the same nest site year after year. They may use the same nest annually for as many as 35 years, or they may build additional nests in their nesting territory, and alternate the use of them from year to year. If their nests are disturbed or destroyed, the pair may never build again. So, although there are large tracts of wild land available to bald eagles, the territorial nature of the birds, their precise nesting requirements, and their past nesting habits, limit the number of suitable nest sites.
Why Are Winter Roosting Sites So Important?
Just as human beings need a warm, sheltered place to go during severe winter weather, so do bald eagles. Winter can be a time of great stress on all wildlife. If that stress can be reduced, there will be a larger and healthier breeding population to migrate North the following spring.
Where Do Bald Eagles Roost In Winter?
Bald eagles generally choose to roost in large trees in protected places within eight miles of their feeding grounds. Along the Mississippi River, for example, they most often roost in heavily wooded, steep-sided valleys, sheltered from northerly winds, or in cottonwoods on islands away from human disturbance.
Bald eagle winter roosting areas have been identified in many parts of the United States - Cobscook and Frenchman Bays in Maine, the Delaware River in southern New York, the Salt Plains National Wildlife refuge in Oklahoma, the San Luis River Valley in southern Colorado, Navajo Lake in northern New Mexico, and the Skagit River Bald Eagle Natural Area in Washington, just to mention a few.
By far the nation's most important bald eagle winter roosting sites are found along the Upper Mississippi River watershed. They include: Ferry Bluff on the Wisconsin River; Eagle Valley, Oak Valley, the Savanna Army Depot, Clarksville and Burlington Islands on the Mississippi River; Rice Lake on the Illinois River; Swan Lake and Squaw Creek on the Missouri River; Land Between the Lakes in Kentucky; and Reelfoot Lake in Tennessee. We presently know of 59 bald eagle winter roosts located on both private and public land along the Mississippi and its major tributaries.
Are All Of These Winter Roosts Protected?
Not by a long shot. Some - Ferry Bluff, Oak Valley, Eagle Valley, Cedar Glenn, the Savanna Army Depot and Plum Island, for example - are protected by conservation organizations or government agencies that manage them. Others are in private hands and are still open to development. Still others are only partially protected and thus threatened by nearby development.
Unlike nesting sites, most of which are in remote areas, many wintering grounds are close to large numbers of people. Human contact with the eagles is inevitable and disturbance of the birds a constant threat. Not to be overlooked is the fact that the use of favored roosting sites varies from year to year, depending on the availability of food. Thus, it is important to preserve as much of the natural ecosystem of these wintering areas as possible. Many are not protected at all.
Why Do Private Organizations Have To Get Involved?
Isn't Our Government Supposed To Be Doing This?
Government agencies have a variety of programs which are supposed to help the bald eagle. The U.S. Fish and Wildlife Service is supposed to be keeping a watch on eagle numbers, and maintains laboratories which investigate the causes of death of eagles and other wildlife. The U.S. Forest Service protects and manages bald eagle nesting territories, monitors nesting success and in the past has surveyed bald eagle populations. The Bureau of Land Management and the U.S. Army Corps of Engineers have in the past instituted policies to help protect bald eagles.
However, bald eagles don't nest, roost and hunt just on government land. Also, government policies often force the eagle to take a back seat to other priorities. The Forest Service's policy of multiple use, and the Corps of Engineers' mandate to develop waterways, may conflict with the life requirements of the bald eagle. Thus, private organizations must get involved if the bald eagle is to survive.
What Research Is Being Done To Learn More About The Bald Eagle?
Bald eagles have only recently been the focus of intensive research, and a great deal of useful information is being gathered about their nesting behavior, feeding habits, roosting patterns, and their migrations. Researchers use a variety of techniques to study the birds. Nests are observed with closed circuit TV cameras. Individual birds are outfitted with leg bands as nestlings, so they can be identified later in life. Movements of eagles are tracked using colored markers or radio transmitters on the birds. The results of these research efforts are presented at scientific meetings such as International Bald Eagle Days, sponsored by the Eagle Nature Foundation.
|
Baby Jupiters must gain weight fast
Jan 05, 2009
This photograph from NASA's Spitzer Space Telescope shows the young star cluster NGC 2362. By studying it, astronomers found that gas giant planet formation happens very rapidly and efficiently, within less than 5 million years, meaning that Jupiter-like worlds experience a growth spurt in their infancy. Image: NASA/JPL-Caltech/T. Currie (CfA)
The planet Jupiter gained weight in a hurry during its infancy. It had to, since the material from which it formed probably disappeared in just a few million years, according to a new study of planet formation around young stars.
Smithsonian astronomers examined the 5 million-year-old star cluster NGC 2362 with NASA's Spitzer Space Telescope, which can detect the signatures of actively forming planets in infrared light. They found that all stars with the mass of the Sun or greater have lost their protoplanetary (planet-forming) disks. Only a few stars less massive than the Sun retain their protoplanetary disks. These disks provide the raw material for forming gas giants like Jupiter. Therefore, gas giants have to form in less than 5 million years or they probably won't form at all.
"Even though astronomers have detected hundreds of Jupiter-mass planets around other stars, our results suggest that such planets must form extremely fast. Whatever process is responsible for forming Jupiters has to be incredibly efficient," said lead researcher Thayne Currie of the Harvard-Smithsonian Center for Astrophysics. Currie presented the team's findings at a meeting of the American Astronomical Society in Long Beach, Calif.
Even though nearly all gas giant-forming disks in NGC 2362 have disappeared, several stars in the cluster have "debris disks," which indicates that smaller rocky or icy bodies such as Earth, Mars, or Pluto may still be forming.
"The Earth got going sooner, but Jupiter finished first, thanks to a big growth spurt," explained co-author Scott Kenyon.
Kenyon added that while Earth took about 20 to 30 million years to reach its final mass, Jupiter was fully grown in only 2 to 3 million years.
Previous studies indicated that protoplanetary disks disappear within 10 million years. The new findings put even tighter constraints on the time available to create gas giant planets around stars of various masses.
Source: Harvard-Smithsonian Center for Astrophysics
|
Feb 16 2012
Update To The Geothermal Basis For ENSO
Published by at 12:07 pm under All General Discussions
Many thanks again to Anthony Watts for the opportunity to guest post at WUWT. There were a lot of great comments on my hypothesis that geothermal sources were the basis for the ENSO. Many who support the conventional wisdom said there was no proof of connections between ENSO cycles and earthquakes, etc. - a valid point I was going to try to tackle. But I also received lots of links in the comments section to others with the same theory - and with far more work behind them.
When I looked into what has been done by others, I ran across an incredibly detailed analysis that answers most of the critics' challenges. Before we get to the study, let's review more background information.
First off, there is a lot of tectonic activity in this region, including undersea hydrothermal vents, ocean ridge spreading and the nearby Galapagos hot spot:
The Galapagos hotspot has a very complicated tectonic setting. It is located very close to the spreading ridge between the Cocos and Nazca plates; the hotspot has interacted with both plates and the spreading ridge over the last twenty million years as the relative location of the hotspot with respect to the plates has varied. Based on similar seismic velocity gradients of the lavas of the Carnegie, Cocos and Malpelo Ridges, there is evidence that the hotspot activity has been the result of a single long mantle melt rather than multiple periods of activity and dormancy.
Clearly this area is home to an incredible diversity of geological structures. When reviewing the Cocos Plate material at Wikipedia, I ran across this study (pdf) that documents the history of the region. It is worth a read (it's pretty short). In essence this region has it all: subduction zones, spreading zones and a hot spot (which created the Galapagos Islands).
So there is no doubt the region is geologically active. Put that one to rest.
I have to say, Anthony Watts picked the title of the post and zeroed in on my initial feeling this was all undersea volcanoes. But the fact is this region also has many known hydrothermal plumes, which look to be the more likely source of the cyclic nature of the ENSO. For example, the Medusa Vent was recently discovered in the region:
A new “black smoker” — an undersea mineral chimney emitting hot, iron-darkened water that attracts unusual marine life — has been discovered at about 8,500 feet underwater by an expedition currently exploring a section of volcanic ridge along the Pacific Ocean floor off Costa Rica.
Using Jason’s mechanical arms and a temperature probe, they logged water temperatures of 335 degrees Celsius (635 degrees Fahrenheit) at the vent’s opening.
These vents can create enormous plumes of hot water:
In 1986, a large plume of hot, particle-laden water approximately one million cubic meters in volume was discovered over the North Cleft segment of the Juan de Fuca Ridge. This plume was unique in its shape (horizontally and vertically symmetric), size (100 km3) and rise height (~1 km), indicating that an enormous volume of hot water had been released in a relatively short period of time.
The question is, how many plumes are there yet to be discovered? Now let’s focus in on that plume I highlighted in the previous post:
What I want folks to appreciate here is scale - both physical scale and time scale. This hot spot emerged in the 10/22/2008 data set and lasted for over 4 months! It is 100 meters below the surface and it is enormous. Here is a blow-up of the hot spots with a size comparison overlaid (red circle) onto the Panama Isthmus:
This hot spot is 125-150 miles (200-240 km) wide! It dwarfs the mega plume referenced above. And it lasted for months, when it was then followed by an even larger plume. I went back to the Argo animation site and counted 14 plumes coming off this area from 2007 to 2012. That’s almost 3 mega plumes a year. And as stated before, it looks like more are coming off the coast of Panama and Peru, but are not being seen until they hit the surface (lack of Argo data in that area).
So far no one has explained how these mega plumes are caused by current and wind 100 meters below the surface. And I doubt they ever will.
Combining the first study linked above for this region with the mega plume work at the Juan de Fuca Ridge, we begin to confirm the idea that these plumes are releasing vast amounts of heat into the ocean, which in turn affects the atmosphere (not the other way around).
One commenter stated there was no proven association between ENSO and geological activity. The study I discovered actually had references to other studies that had done this work already:
Increased seismic activity along the East Pacific Rise (EPR, Walker, 1988, 1995) and Juan de Fuca (Johnson et, al., 2001) ridges is known to precede increases in hydrothermal venting rates and corresponding SST temperature anomalies.
Going back to Juan de Fuca Ridge, which has been studied for many years, we find this:
Plotted along with the heat trend is the trend of the ratio of 3He to heat. The only source of 3He is from the degassing of magma. Early studies presumed that a 3He/Q ratio of ~0.1 units was “normal” for hydrothermal systems, …
Now this from the study on the Cocos Plate region:
Helium seawater profile data obtained during the World Ocean Circulation Experiment (WOCE) show high concentrations of Helium3 over trenches off the Mexican coast indicating increased hydrothermal venting activity within the trenches (Fig. 9). This particular profile from Pacific Marine Environmental Labs (PMEL) shows the largest anomalies directly over the trenches of the Mid-American Trench. The Galapagos rift just north of the Galapagos Islands has been a subject of study for nearly three decades by several groups (WHOI, PMEL, and RIDGE). This is the site of a documented mantle plume and there were several ongoing integrated site studies on hydrothermal vent systems.
Given all this supporting work (and there is more) I think we can conclude the following:
1. The region in question is seismically active
2. The region in question is home to many hydrothermal vents, a hot spot and a spreading ocean ridge
3. Studies do exist linking seismic activity to ENSO cycles
4. The mega plumes discovered 100 meters down by the Argo data clearly form off the West Coast of Central and South America, and travel westward. And they are produced very regularly (not occasionally).
5. The general flow of warm water is East to West (see here for animations at the surface, 100 m and 1000 m)
6. Wind and waves can and do travel in different directions than the underlying currents
7. The ocean floor is thin and cracked and we have a very limited understanding of how water and underlying magma interact on various scales.
We know less about the deep ocean than we do about the Earth-facing side of the Moon. We only discovered plate tectonics within my lifetime, and hydrothermal vents within my children's short lifetimes. We are just beginning to explore the ocean's geological wonders, which cover an area three times the size of the land surface of this planet.
All that is settled about science today is we still have a lot to learn, and what we know is still dwarfed by what we don’t know.
Some interesting background material:
1. Nazca Plate
2. Cocos Plate
3. Statistical analysis of the El Niño–Southern Oscillation and sea-floor seismicity in the eastern tropical Pacific
4. Hydrothermal Plumes Over Spreading-Center Axes: Global Distributions and Geological Inferences
8 Responses to “Update To The Geothermal Basis For ENSO”
1. Neo says:
When climate scientists look at the surface temperature of the Earth, rarely do they treat the Earth as having self-generated warmth of its own.
2. AnonyMoose says:
As I pointed out at WUWT, the mountain jets coming across Central America are causing localized warming, cooling, and eddies of water. If you look at the surface temperatures you can see the eddies. The 100m temperatures are showing the bottom of eddies which are mixing warm water down from the surface.
Your source:
This other view shows that the west side waters reflect the temperatures on the east side of Central America:
I have not done frame-by-frame comparison between surface and 100m, but if you try that you might be able to see if there is a surface eddy above the 100m hot spots. I can also see similar movements faintly at 1000m, which implies some deep horizontal eddies rather than vertical plumes because the eddies are moving horizontally more than a vertical plume would tend to.
3. AJStrata says:
Air cannot do that much warming to that much depth. Physically impossible.
But thanks for the link and heads up. I was not aware of that phenomena and love to learn new things (everyday if I am lucky).
4. AnonyMoose says:
Yes, obviously most of the ocean is not being warmed to that much depth. But warm water in a small eddy does not require warming all the surrounding water.
If my idea is correct, there should be a connection between the deep warm spot and the surface. Predictions are useful.
Does the location of the 100m warm spot match the location of a surface eddy, or do the rate of horizontal movements match?
I discovered that the Maps option shows each frame of the animation. Going back to 2/25/2009, as shown in your image, shows the same hot spot at 100m.
Then click on the Surface option. I certainly didn’t expect a swirl like that. It is a thing of beauty.
5. AnonyMoose says:
Here is an animated GIF of the surface and 100m on 02/25/2009.
6. AnonyMoose says:
Come to think of it…. is the 100m temperature from an Argo float? Might a float get trapped in an eddy? Especially because I think I see a ghost of the eddy down at 1000m — a float might have been guided down the middle of the eddy, which increases its chances of measuring warmth.
7. Layman1 says:
I don’t think the Argo floats can get caught in an eddy because they submerge to depth, take readings, then rise to various points to take more readings, then surface to take more readings, and then report their data via satellite.
The ocean is filled with known layers of varying temperature (related to salinity, I think). Our Naval submarine force has used these for years to hide from enemy sonar. These layers are known, predictable, and relatively thin. For a float to become "caught" it would have to remain at a given depth.
8. AnonyMoose says:
Actually, I was thinking about the effects upon a surfaced float which was submerging. Objects in a vortex are drawn to the middle, and as a float submerged it would probably follow the center of the vortex down.
But you described the measurements being taken on the way up, which makes sense because the location will be known shortly when the float surfaces. Even if a float follows a vortex down, if it is not measuring then the vortex won't skew the regional measurements. The floats at depth are probably too deep for a vortex to bother them much. So when they surface they will tend to be sampling the surface randomly, without much effect by near-surface patterns. Surfacing through a vortex might draw them toward the center somewhat, but the float will have chosen the surfacing location based upon deep currents, so being centered in, and by, a vortex won't matter much.
|
The impact of the changing global energy map on geopolitics of the world | CHINA US Focus
January 24, 2013
An energy source shift known as the "shale gas revolution" is unfolding in the United States. Shale gas production, which was hardly noticeable only 10 years ago, now accounts for about one-fourth of the country's natural gas output, and the ratio is expected to reach 50 percent in 2035. Today a new energy axis is taking shape in the Americas, comprising proven shale gas deposits in Alberta, Canada; North Dakota and Texas in the US; French Guiana; and a newly discovered super-large reserve under the ocean near Brazil - promising a new oil-gas production center of the world in the foreseeable future. Because energy is a strategic resource and is inseparably linked to world politics as well as national strategies, any change in the world energy map will have a profound impact on global geopolitics.
Feng Zhaokui
Apparently the changing world energy map favors the US as a significant shot in the arm for the American economy. The shale gas industry is a massive and complex undertaking in system engineering, and most of the technological breakthroughs and applications took place first in the US. Those achievements helped silence critics of shale gas exploration and production over environmental impact and water consumption, and gave birth to a new industrial value chain characterized by innovation and technology-driven growth that also boosts employment and tax revenue, besides returns from exports of technology as well as finished product. As the rising output of shale gas pushes oil prices downward, it could bring about the "re-industrialization" of the US, which will not only benefit the country's manufacturing industry but will also likely attract manufacturers from around the world to invest. For example, Airbus now outsources the production of some aircraft parts to America, as do many other international conglomerates, because of the relatively low energy cost. Then there is the potential to boost individual consumption by bringing gasoline prices down with growing shale gas output. Some experts estimate that increased shale gas production may lift US economic growth by 2.0 to 2.3 percent in 2020.
Energy is an important part of the material basis for human society. In today's world, wind and solar power are still a long way from becoming a major component of the energy structure of most countries in terms of electricity generation and heating; hence the so-called power in energy resources (ERP) still means power in traditional energy resources such as oil and natural gas, including control of energy resources, transport channels, and markets. ERP is now a game-changing weapon in international politics used to protect national interests, seize geopolitical power, maintain a say in international affairs and build up influence.
Currently the US is the sole country in the world with complete ERP - control over energy resources as well as their transport channels and markets. The all-out development of shale gas production will no doubt strengthen its ERP and consequently its hegemonic status.
The racing development of unconventional oil/gas exploration and production in the US, or North America for that matter, will likely play a significant role in strengthening cooperative relations between the US and energy-consuming Asian nations such as Japan, China, the Republic of Korea, some Southeast Asian countries and even India.
Japan, in particular, was forced to reduce energy consumption significantly after the massive earthquake of March 11, 2011 shut down 43 of the 54 nuclear reactors in operation, putting tremendous pressure on thermal power plants to increase electricity generation as much as possible and raising demand for oil, natural gas and coal imports dramatically. Natural gas was the most sought after for its superior environment-friendly and cost-effective qualities compared with other traditional energy resources. Even though natural gas exports to Japan would be profitable, the US still refuses to supply Japan with liquefied natural gas (LNG) because the two countries have yet to sign a free trade agreement. As a result, Japanese energy companies have turned their eyes to unconventional fossil fuel resources such as shale gas in North America, which is regarded as a possible "energy treasure trove", and are increasing investment in related development projects in the hope of establishing shale gas production in the US to export shale gas to Japan. That means Japan's dependence on the US will expand from security alone to energy resources and probably food, too. Odds are that shale oil/gas will become an extremely important "political resource" and, given its expected impact on geopolitics, a "Pacific Rim energy supply chain" stretching from Alaska, Canada, the mainland US and Australia to Japan will overtake the southern oil shipping route from the Middle East to Japan through the Malacca Strait and South China Sea in terms of strategic significance.
Feng Zhaokui is an honorary academician of the Chinese Academy of Social Sciences.
|
Debugging and testing made easy. (Part 1)
9 Mar 2003
Some macros and tips to take the air out of bugs, and classes to make unit testing painless and simple.
How often have you found yourself in the situation where you are debugging some code and everything seems logical, but it just doesn't work? Quite often, as a beginner. Well, I find myself in this situation much less these days because of some great tips I have picked up. I want to share them with you and hope they will help you too.
I program in VC++, but quite often I find myself writing console utilities for various tasks. VC++ and MFC have some great macros (C++ #defines) to help you out in debugging. But if you aren't using MFC in your project, then you are stuck. Well, those macros aren't a lot of magic and are quite easy to reproduce for your non-MFC projects (and non-VC++ projects too).
One of the biggest reasons (according to me, at least) there are bugs in code is incorrect assumptions about what a particular piece of code is supposed to (and not supposed to) do. So the way out is to document your assumptions in code, and make your code warn you when it is not working as expected, or is being used (or misused) in a wrong way.
Some theory (I hope it's not boring): Bertrand Meyer introduced the concept of design by contract to do exactly the above (and much more, but we don't want to get into those gory details in this article). But C++ does not support contracts the way Eiffel does. So what do we do? We come up with our own ways of implementing contracts.
What the hell is this contract I'm talking about? I don't need no legal mumbo jumbo for programming. You don't. Basically, a contract for a function (or method) indicates what conditions it expects to be satisfied in order to work correctly, what conditions it guarantees to its caller, and finally what global conditions must never be violated.
The conditions which must be satisfied for a function to work correctly are its preconditions, or "REQUIRE"-ments. For example, "arguments shouldn't be null" is a very common requirement. The conditions a function guarantees to its caller are its postconditions; they "ENSURE" that the result is as expected - for example, if the function works with files, that the operations were indeed completed and not aborted. The global conditions are typically called invariants, or "ASSERT"-ions.
When we are debugging or in development, we want to be informed when the contracts are broken, because a broken contract indicates that the result may not be correct and something is amiss. One of the easiest ways is to print messages to the console, like "precondition not satisfied", "assertion failed", or "so-and-so variable has this value while this assertion failed". Here are some macros that do exactly that.
#define DBGOUT cout // you could just as easily put the name
                    // of an ofstream here and log everything to a file.

// precondition tracing macro
#define REQUIRE(cond) \
    if (!(cond)) { \
        DBGOUT << "\nPrecondition \t" << #cond << "\tFAILED\t\t" \
               << __FILE__ << "(" << __LINE__ << ")"; \
    }

// postcondition tracing macro
#define ENSURE(cond) \
    if (!(cond)) { \
        DBGOUT << "\nPostcondition \t" << #cond << "\tFAILED\t\t" \
               << __FILE__ << "(" << __LINE__ << ")"; \
    }

// invariant tracing macro
#define ASSERT(cond) \
    if (!(cond)) { \
        DBGOUT << "\nAssertion \t" << #cond << "\tFAILED\t\t" \
               << __FILE__ << "(" << __LINE__ << ")"; \
    }

// dump a variable's value
#define TRACE(data) \
    DBGOUT << "\nTrace \t" << #data << " : " << data \
           << "\t\t" << __FILE__ << "(" << __LINE__ << ")";

// print a warning indicating that code is being executed
// which shouldn't really be executed
#define WARN(str) \
    DBGOUT << "\nWarning \t" << #str << "\t\t" \
           << __FILE__ << "(" << __LINE__ << ")";
Using the code
Let's take the example of a function to divide two numbers and see where these macros would help us.
int div (int a, int b)
{
    REQUIRE(b != 0)
    int result = a / b;
    ENSURE(b * result <= a)
    return result;
}
The above example may look trivial, but consider more complicated functions: with proper use of the above macros you will at least catch the most common causes of errors. The TRACE and WARN macros will be particularly useful in figuring out why something is not what it should be, and ENSURE, REQUIRE and ASSERT will give you confidence that your functions, if given correct input, will produce correct output.
Slightly more complicated example:
// parse a number word ("thousand") into its equivalent integer (1000)
int Parse(char* str)
{
    REQUIRE(str != NULL) // null strings cannot be parsed
    int result;
    // some table lookup is done here
    // assert that the index is within the array bounds
    ASSERT(lookupindex < tablesize)
    ENSURE(result >= 0) // the caller doesn't expect negative answers
    return result;
}
1. There are some drawbacks to, and possible misuses of, these macros. The first is that they slow down processing. You typically don't want them in release code, only while developing or debugging; in that case you conditionally define the macros so that they produce code in debug builds but not in release builds (see the zip file - they are defined that way in dbgmacros.h).
2. Secondly, the macros expect to test conditions only, and no processing should be done inside them. For example, ASSERT(i++ != 10) will most likely fail in your release build, because the increment disappears when the macro is compiled out. Don't do processing in the macros; only check conditions.
Points of Interest
To dig further, I urge you to look deeper into the design-by-contract philosophy. It has a tremendous impact on how object-oriented systems are designed and implemented.
I will come up with similar macros to ease up your testing in the next part of this article, probably next week.
• First revision, March 10th, 2003.
About the Author
Kumarpal Sheth
Web Developer
India
No Biography provided
Comments and Discussions
ModAssert is better - JasonReese, 18-May-06 1:42
if-else parse error - Gisle Vanem, 12-Mar-03 8:41
Re: if-else parse error - Kumarpal Sheth, 12-Mar-03 18:06
Re: if-else parse error - Victor Boctor, 29-Sep-03 21:58
Re: if-else parse error - Gisle Vanem, 29-Sep-03 22:30
Interesting article - nerd_biker, 12-Mar-03 4:30
Re: Interesting article - Kumarpal Sheth, 12-Mar-03 8:27
Re: Interesting article - jerikat, 30-Dec-03 6:58
Re: Interesting article - Correction! - Corwin of Amber, 11-Sep-06 10:04
I found it helpful. - 73Zeppelin, 10-Mar-03 16:35
Re: I found it helpful. - Kumarpal Sheth, 11-Mar-03 5:48
Re: I found it helpful. - nerd_biker, 12-Mar-03 4:11
Not full DBC - William E. Kempf, 10-Mar-03 9:22
Re: Not full DBC - Kumarpal Sheth, 10-Mar-03 15:21
Last Updated 10 Mar 2003
Article Copyright 2003 by Kumarpal Sheth
Everything else Copyright © CodeProject, 1999-2014
Terms of Service
|
@article{Pinheiro:1999-03-01T00:00:00:0007-4977:243,
  author   = "Pinheiro, Marcelo Antonio Amaro and Fransozo, Adilson",
  title    = "Reproductive Behavior of the Swimming Crab Arenaeus cribrarius (Lamarck, 1818) (Crustacea, Brachyura, Portunidae) in Captivity",
  journal  = "Bulletin of Marine Science",
  volume   = "64",
  number   = "2",
  year     = "1999",
  pages    = "243-253",
  url      = "http://www.ingentaconnect.com/content/umrsmas/bullmar/1999/00000064/00000002/art00005",
  abstract = "In this study, the reproductive behavior exhibited by Arenaeus cribrarius in captivity was described, and the duration of each behavioral stage was measured. Swimming crabs were trawled in Ubatuba, northern littoral of São Paulo State, Brazil, and maintained in aquaria. Water conditions and food items were provided according to this species' natural requirements in the wild. In the presence of premolt females, intermolt males exhibited a courtship display that became intensified when the potential mate was visually perceived. After mate selection, the male carried the female under itself (precopulatory position) for 29.8 ± 5.1 d until the female molted. Afterwards, the male manipulated the recently molted female, and inverted her position under itself as to penetrate her with his first pair of pleopods (copulation), a process that took 17.1 ± 4.6 h. After copulation the male continued to carry his soft-shelled mate for 29.7 ± 5.8 d (postcopulatory position). The time elapsed between copulation and spawning was 57.8 ± 3.8 d, and the time interval between successive spawns 33.8 ± 7.1 d. Total embryonic development took 13.5 ± 2.1 d in temperature conditions of 25.0 ± 2.0° C. During the last 4.7 ± 1.4 d, embryos' eyes were already visible. The reproductive behavior pattern in A. cribrarius is very similar to those previously described in other portunids."
}
|
After a brief foray into the realm of visualizing sound through fire I decided to head in the other direction and watch vibrations propagate through some different kinds of fluids. I used water and a corn starch solution to get what I thought were some pretty cool results.
Step 1: Materials
- Speaker (That you don't intend to use again...)
- Driver for the speaker (I had one lying around but you can build your own, use a guitar amp, etc.)
- Plastic lid that is about 4'' - 6'' in diameter with about a 1/2'' lip to contain the fluid
- Corn Starch
- Water
- Hot Glue Gun
- Wire strippers, electrical tape, etc.
- Clear Plastic wrap (this is optional but when using corn starch it can replace the lid and just cover the speaker for some cool results)
- Frequency generator (I just used a program called AudioTest that I downloaded to my computer)
Step 2: Preparing the Speaker
Essentially this device is just a lid glued to a speaker. This allows the vibrations from the speaker to be transferred directly into whatever fluid is in the lid, letting us observe patterns. The other part of the setup is wiring the speaker to the driver. This part of the process will differ depending on your setup, but it should be pretty straightforward. I would recommend cutting the wires long enough that your speaker can sit on the table far enough away to keep water from splashing on the sensitive electronics.
Step 3: Running the setup with water
To use this device, simply pour water into the lid and play some sounds through the speaker. If you play a slow sweeping wave you might be able to find the frequencies at which the fluid resonates, as there will be a well-defined pattern. The fun part is experimenting with and changing these frequencies to produce cool patterns in the water, allowing you to visualize sound.
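If you don't have a frequency-generator program like the AudioTest tool mentioned in the materials list, a slow sweep can be generated in software. The sketch below is a minimal example using only Python's standard library; the filename and the 20-200 Hz range are illustrative choices, not part of the original instructions:

```python
import math
import struct
import wave

def write_sweep(path, f_start=20.0, f_end=200.0, seconds=10.0, rate=44100):
    """Write a mono 16-bit WAV file containing a linear sine sweep."""
    n = int(seconds * rate)
    phase = 0.0
    frames = bytearray()
    for i in range(n):
        # The instantaneous frequency rises linearly from f_start to f_end;
        # accumulating phase avoids clicks as the frequency changes.
        f = f_start + (f_end - f_start) * i / n
        phase += 2 * math.pi * f / rate
        frames += struct.pack('<h', int(32767 * 0.8 * math.sin(phase)))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_sweep('sweep.wav')
```

Playing the resulting file through the driver sweeps the lid slowly through the low frequencies, where standing-wave patterns in the water are easiest to see.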
Awesome! I've always wanted 3D pattern visualization of frequencies.
ToolboxGuy1 year ago
Oobleck! The quasi-solid!
Brilliant idea!
zast_bing1 year ago
I did not understand the tentacles of corn starch. How can corn starch have any tentacles at all? Or, do you mean that after pouring corn starch onto the speaker covered with plastic wrap, and as sound waves are fed in, one will see tentacles?
ExperiencingPhysics (author) zast_bing1 year ago
Yeah you will need to follow all of the instructions if you want to see "tentacles"
zast_bing1 year ago
This is good, I want to try it out with my language as a project
bryan31411 year ago
I believe the bit with the corn starch was featured on Big Bang Theory....thanks for more details!
You should try this with a sheer thinning fluid, like paint.
olliebaum1 year ago
They did this on QI! Really cool by the way. Check it out -
Hey, great instructable. Using cornstarch liquid is a brilliant way to watch sound wave effects depending on pitch and tone. If you did want to use the speaker again, you could have used clingfilm and tightly placed it over the top without tearing it. Brilliant instructable though.
sunshiine1 year ago
I liked the example when the corn starch was added, interesting instructable! Thanks so much for sharing your hard work and do have a splendorous day!
|
One of the weaknesses of this version of the encyclopedia which I hope to fix in future versions is the lack of references to primary sources - books and articles on the topics discussed here, including non-WG sources. However there are already a few references to the following books and articles:
By Dick Hudson
1984. Word Grammar, Blackwell
1990. English Word Grammar, Blackwell
1995. Word Meaning, Routledge
1996. Sociolinguistics (2nd edition), CUP
1998. English Grammar, Routledge
Articles or chapters
1995. Does English really have case?
1997. The rise of auxiliary DO: verb-non-raising or category-strengthening?
1997. Inherent variability and linguistic theory.
1998. Trouble on the left periphery:
1999. (with Chet Creider) Inflectional morphology in Word Grammar.
1999. Subject-verb agreement in English.
2000. (with Jasper Holmes) Re-cycling in the encyclopedia.
2002. (with Amela Camdzic) Serbian-Croat-Bosnian clitics and Word Grammar.
1998. *I amn't.
1998. Language as a cognitive network.
1999. Case Agreement, PRO and Structure Sharing.
2000. Grammar Without Functional Categories.
2000. Quantifiers in Word Grammar.
2000. Discontinuity.
2000. Gerunds and multiple default inheritance.
2001. Clitics in Word Grammar.
2001. Word Grammar (for Handbook of Cognitive Linguistics).
2002. Case-agreement, PRO and structure sharing.
2002. Buying and selling in Word Grammar.
By others
Deacon, T 1997. The Symbolic Species. Penguin
Langendonck, Willy van. "Determiners as Heads?." Cognitive Linguistics 5 (1994): 243-59.
Levelt, W, Roelofs, A and Meyer, A 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences 22, 1-45.
Pinker, S 1997. How the Mind Works. Penguin.
Reisberg, D 1996. Cognition. Exploring the Science of the Mind. Norton.
Rosta, A 1997. English Syntax and Word Grammar Theory, London PhD
Sperber, D and Wilson, D 1995. Relevance. Communication and Cognition. Oxford: Blackwell.
Tesnière, Lucien. 1959. Éléments de Syntaxe Structurale. Klincksieck.
If you want more references there are lists on my web site and on the site for Word Grammar. There are also links there to web-sites for other general theories of language structure.
|
North Korea
North Korea Vacations and Tourism Guide
North Korea Tourism and Travel: online guide for vacations to North Korea
Overall rating: 7
Surface: North Korea has a total area of around 120,500 sq. km.
Population: North Korea has a population of approximately 23,113,000 people, with Buddhists and Confucianists forming a major portion of the population.
System of government: North Korea is a communist state with a highly authoritarian regime.
Capital: Pyongyang with a population of approximately 3,000,000 is the capital of North Korea.
Religion: Buddhism, Confucianism and Christianity are the most important religions.
Official Language: Korean is the official language of this country.
Climate: North Korea experiences a temperate climate.
Units of measure and electricity: North Korea follows the metric system of measurement, and the electrical supply is 220V at 50Hz.
Time Zone: The North Korean standard time is GMT + 9.
Currency: North Korean Won (1$=142.45KPW)
Travel documents required: A visa is required for all visitors; travelers from South Korea and the US are generally not allowed into the country.
|
Animalism / Communism
Several commandments were set up to establish early laws in the Animalism system. Later, some were changed to accommodate the pigs' desires for alcohol, beds, and the milk that the cows produced. Most of the animals passed it off as "necessary for the pigs, otherwise Jones would come back," which was all part of Napoleon's scheme.
Old Major, an old boar who foretold the coming revolution, represents Lenin. He was merely a philosopher of the horrors to come. Though he believed in the ideals of Communism, he did not share Stalin's viciousness and malicious spirit. While the people of Russia were dying in poverty, most of the officials and leaders of the Russian Communist government were living the high life off of the dying lower class.
When Napoleon took over command of the farm, things changed drastically. Meetings were no longer to be held, as everything would be decided by a small committee
Some topics in this essay:
Russia Napoleon, Animalism Napoleon, Animal Farm,
George Orwell's Animal Farm... If anything has changed since the rebellion, it is that under communism, or Animalism, the animals suffer worse than they had under the capitalism of Jones. ...
Animal Farm and Maus... We understand the point that Orwell is trying to make---that the ideal of Communism, or Animalism, is soon corrupted and results in crimes by animals against ...
|
20 definitions by Ironbrand
Many examples abound, from the liberal screaming out news media sound bites to the clergy who say to trust their judgment while molestation lawsuits rain upon them.
by Ironbrand April 23, 2005
In geo-political denotation, an agreement that, once signed, causes an end to hostilities PROVIDED that the conditions of the aforementioned agreement are met. Failure to comply with the accord conditions can result in the original action stopped by the Accord being carried out as if no agreement was ever effected.
Germany and Japan are still bound by Accords both signed in 1945, stating that if they ever became belligerents in their respective theatres again, the allies would crush them. These Accords are still in effect.
Saddam Hussein signed an Accord in 1991 allowing him to retain power PROVIDED:
1.) He destroyed his WMD stockpiles and showed proof of their disposal.
2.) That he stand down his military arm and cease to be a threat to the gulf region
3.) That he forfeit and disavow all claims to Kuwait and other regions.
Said Accord trumped any other agreements that he had with such entities as the UN. When Saddam refused to comply with the Accord, Iraq was invaded (An action that was halted due to the 1991 Accord) and Saddam was removed as a leadership entity. It is interesting to note that the UN refused to enforce the Accord due to not only the UN profiting from trading with Saddam ( in violation of a trade embargo) but France, Russia and Australia were also found complicit in this action.
by Ironbrand January 27, 2007
An alternative name for the compound Polonium-210 that was used to kill a critic of the Putin Administration in London. Polonium is so rare as a naturally occurring radioactive isotope that it has to be created. The sheer gall of the action, however, is nothing in comparison to the big picture. Russia is but a shadow of what the USSR once was; they are in essence a third world country with no real infrastructure of consequence. On top of this assassination, two anti-Putin journalists were also killed. The Bear is not buried at all; it lies silent in the shadows, wounded, angry, and above all, dangerous.
The Putinium (Polonium-210) used in the assassination of the former KGB operative in London was traced back to a Russian Nuclear plant. Go figure.....1918 replay....the Bear Rising.
by Ironbrand January 28, 2007
In the political realm, an aspect of a candidate that will cost them votes no matter how they spin the situation. This is factually based on the election of 1960, where John F. Kennedy's religion cost him votes. A woman running for president would have such a factor attached.
John F. Kennedy was a practicing Catholic and the first such Catholic running for president. Due to this fact of his religion, it cost him votes in the election. A number of Nixon votes were for the most part Anti-Kennedy, not Pro-Nixon. Since that time, such an unavoidable factor against a candidate was referred to as a Kennedy Factor.
by Ironbrand January 20, 2007
An Omnicider is one who has the urge to kill all around them, indiscriminate of the havoc and ruin they cause. This is a fitting term for the suicide bombers that seem to be prevalent these days, since that is their ultimate goal: to effect an act of Omnicide.
An omnicider detonated themselves today, killing 7 and wounding 10.
by Ironbrand April 17, 2006
When the Bolsheviks took over Russia in 1917, they eventually spread across the world to promote their concept of paradise. How they were treated varied from country to country. While some welcomed them, others forced them into an entity of legal discourse. In England, they were not tolerated at all. Any group outside of Labour and Tory was marginalized as much as possible. The political groups thus marginalized away from any real power became known as factions. A Red Factioner is someone from England who is in support of Marxist principles of governance.
That bloke there thinks that the Queen is one of the Bourgeois! Bloody Red Factioner he is.....
by Ironbrand August 09, 2006
a person who is misunderstood at times, but intelligent, affectionate and very cuddly
Her husband treats her like shit, but she sure is a Beccles!
by Ironbrand August 06, 2006
|
Empowering grassroots linkages
by Helen Ingram and Robert G. Varady
Our shrinking planet is hosting a growing number of nations. The world is rapidly being reconfigured politically, leading to the proliferation of new boundaries in a manner reminiscent of lines on a fractured mirror. The collapse of communism in the Soviet Union and Eastern Europe has drastically changed the political landscape. New, often ethnically-based nations are inventing themselves and securing their borders. Over the past five years, 49 new international boundaries have been created. Simultaneously, the distances among nations are being narrowed through migration and travel, communications, and trade.
By their very nature, borders create stress and contradictions, and so the increase in the numbers of borders is a cause for concern. Borders can be flashpoints between neighboring countries over issues held in common. Borders have powers of magnification; at the border, conflict or cooperation among neighbors becomes conflict or cooperation among nations. Robert Frost probably was wrong about fences, but among nations good borders surely make good neighbors.
It is unfortunate that more attention has not been paid to the lessons good borders have to teach. We argue here that the U.S.-Mexico border, while not often cited for the good example it presents, has functioned quite well. Despite joining nations that could hardly be more different, and despite being subjected to enormous pressures of industrialization, population expansion, and environmental degradation, this border is, surprisingly, a zone of cooperation. The many barriers and discontinuities political boundaries and national sovereignty erect have regularly been bridged through the flexibility, openness, tolerance, empathy, and good will of these border neighbors.
While our conclusions are largely positive, it is necessary to begin by reciting the ways in which the gulf that separates the United States from Mexico is amplified by the international boundary, which serves to divide and disconnect. The dividing influence of borders is especially evident in regard to the environment, which ignores human constructs and observes instead the laws of airsheds, watersheds, or ecosystems.
In this article, we describe the way borders (1) separate problems from solutions, (2) create perverse economic opportunities, (3) marginalize the interests of border residents in national policy making, and (4) erect barriers to grassroots problem-solving. Once the impediments to coherent treatment of borders are identified, we focus on the informal links that lend coherence and foster cooperation before moving on to the need for better institutions that reinforce cooperation.
Borders Separate Problems from Solutions
Political boundaries, whether domestic or international, often separate the location where problems are felt from the location where the most effective and efficient solutions need to be put in place. When they are international, the lines become gulfs that are enormously hard to bridge.
The 14 pairs of twin cities and the extensive rural areas between them along the U.S.-Mexico border share common airsheds, so that particulate matter and other pollutants move freely across the border. A number of border cities on the U.S. side are not achieving minimum air-quality standards and so face regulatory limitations on future growth unless air quality can be improved. The cheapest and most efficient means of reducing pollution often is available on the Mexican side of the airshed. It is a good deal less expensive to retire or replace some of the older factories, pave some streets, and install some controls in Juarez or Tijuana than to make more costly technical improvements on the U.S. side, where the least expensive strategies already have been put in place. The international border makes such responses difficult, if not impossible, to implement.
The border region, particularly that part within the Sonoran Desert of northwestern Mexico and southwestern United States, is a biological treasure. There are, for example, 460 migratory species that are endangered, threatened, proposed for listing, or candidate species. A number of national parks, recreation areas, and wildlife sanctuaries have been established to protect habitat, but international borders inhibit comprehensive and coordinated efforts. As a consequence, enormous efforts to save some endangered species may be doomed.
The separation by international borders of problems from the location where the most effective solutions need to be put in place is especially common when it comes to transborder water resources. The U.S.-Mexico border region is drained by numerous transborder river basins, and there are many shared aquifers upon which both sides depend. Yet the past pattern of water policy, particularly on the U.S. side, has been myopic and nationalistic. An enormously valuable fishery in the Gulf of California and important habitat for bird life created by the Santa Clara Slough near the mouth of the Colorado River are greatly dependent upon U.S. water management policies that are blind to consequences for the environment in Mexico.
The separation of problems from solutions works both ways. The rich riparian habitat on the Santa Cruz River north of Ambos Nogales (the cities of Nogales, Sonora, Mexico, and Nogales, Arizona, U.S.A.) is utterly dependent on wastewater flows from an international waste treatment plant that, for the most part, treats water coming from and belonging to Mexico. This streamside habitat is especially valuable in Arizona, where only 5 to 10 percent of all original riparian areas in the state still exist. Mexicans are not being compensated for these flows and have the legal right at any time to reclaim the water and leave the riparian habitat on the U.S. side high and dry. Similarly, there is some evidence to suggest that groundwater pumping in agricultural areas of Sonora adjacent to Quitobaquito Springs in Organ Pipe Cactus National Monument in Arizona is affecting water flows in these valuable springs.
Borders Create Perverse Economic Opportunities
The economic opportunities that exist at borders create a number of incentives that appear perverse from an environmental perspective. Border enterprises spring up and prosper because businesses can offer easy access to something not found so inexpensively or of such quality on the other side. For instance, because water is a small part of costs of production on borders, business location will poorly reflect the actual scarcity of water resources. Instead, the low wages of Mexican workers and their high productivity attract foreign investment and job creation at the border despite water scarcity. Income per capita, for example, shows that in the early 1990s residents on the Mexican side of the border earned approximately one-seventh the average per capita income of residents on the U.S. side; Mexico's current economic crisis has no doubt widened this gap. Consequently, huge population expansion has occurred in arid areas with diminishing and irreplaceable water resources. Competition among water users in arid lands to gain at others' expense is exaggerated at borders, where restraint may simply mean an opportunity forever forgone to enjoy the dwindling resource. Restraint is especially unlikely when the forces of global economic competition and the need to repay debt reinforce the focus on immediate economic opportunities for profit.
There are other examples of the ways in which borders introduce perverse incentives into the global economy. U.S. demand for mesquite charcoal and shrimp has caused the disappearance of ironwood bosques in the Sonoran Desert and overfishing in the Gulf of California. The artificially low prices U.S. consumers pay are a poor reflection of true costs.
Borders Marginalize Grassroots Interests
Borders often are the areas furthest from a nation's center, and as such they marginalize the border region's concerns as not central and, therefore, of secondary importance in designing policy. It is not surprising that policies framed in national and state contexts frequently are at odds with border needs and priorities. For instance, newspaper accounts in 1993 suggested that the voice of Nogales, Arizona, residents was all but ignored by the U.S. General Services Administration in an upgrading of the Nogales Port of Entry and that local business people lost millions of dollars unnecessarily. Similarly, natural resources managers on both sides of the border face sets of laws, institutions, and decisionmaking processes that are unresponsive, complicate their problems, and impede cooperation.
A number of well meaning state and national policies in water resources have adversely impacted Ambos Nogales. The Arizona Groundwater Management Act, which was intended to conserve water, may force the city of Nogales, Arizona, to be stingy with its neighbor even though it has a long tradition of generously transferring water during droughts. Similarly, the national laws regulating effluent quality from wastewater treatment plants may force Nogales, Arizona, to withdraw from an internationally operated plant that ought to serve as a model for transborder cooperation.
National governments pass laws that make heavy demands on a border region's human and financial resources. Border areas are diverted from pursuing their own priorities and are not provided the tools to respond to priorities set elsewhere. Along the U.S.-Mexico border, construction of environmental infrastructure and implementation of environmental laws lag far behind other areas in both the U.S. and Mexico. Investment of national funds in border water and sewer projects has increased but still is not consistent with pro-growth industrialization policies and lags far behind the rise in demand. The border is remote from huge Mexico City, which has in the past attracted most of Mexico's environmental concern. Similarly, the U.S. Environmental Protection Agency (EPA) placed the border near the bottom of its priorities until the recent debate on and passage of the North American Free Trade Agreement (NAFTA).
Borders Introduce Barriers to Grassroots Problem-Solving
But if borders can often result in official policies that are unresponsive to border interests, they can also inhibit bottom-up, grassroots problem-solving efforts. The two influences are not unrelated. In fact, it is the very maze of regulations, awkward institutional frameworks, and lack of official interest that constrains and frustrates community-based actions. While border residents have strong reasons to search for understanding and agreements across boundaries, they lack sufficient authority or control to deliver on any cooperative agreements they might negotiate. Regularly and notoriously, immigration, drug control, and other nationally driven policies have ignored and destroyed good feelings and working arrangements among border residents that have taken decades to construct.
Similarly, international treaties and agreements that represent the sovereign national interest, as dictated by central governments, may poorly reflect the needs and preferences of border people. International agreements depend upon political processes internal to national governments. Therefore, they often fall short of achieving goals precisely because they do not sufficiently take into account the locally-based actors whose behavior will determine the extent to which laws are implemented. Incentives for international institutions as a rule poorly reflect realities in the field. Instead, high-level policy makers are rewarded for setting ambitious goals without providing the appropriate understanding, tools, and capacity at the local level to achieve those goals. Goals and objectives in this way become burdens and impediments to firing-line actors. For instance, the side agreements to NAFTA have created new institutions that feature much more stringent environmental protection along the border. But the collection and dissemination of comprehensive transnational data on the state of and threats to the border environment -- a necessity for successful implementation -- are not included.
Overarching Border Links
The divisive and disintegrating characteristics of borders cited above would make effective problem-solving next to impossible if counteracting forces were not present. Fortunately for the U.S.-Mexico border, longstanding cross-national ties facilitate cooperation. What formal, governmental machinery treats as distinct and separate national interests, informal arrangements unite in a less fragmented border community interest. Together, these links create a transnational border region that is different from both the United States and Mexico. Integrating forces include a common history, shared border culture, kinship ties, common language, integrated economies, and informal networks among officials and groups.
Among the most important factors binding border region residents, even if unexpressed, is the realization that movement and migration are permanent and that political dividing lines are temporary social constructs. Indeed, perhaps the most permanent fixture of the border region is the fluidity of populations. The shared history of border residents has imparted an expectation and tolerance for new and different neighbors, a belief that they are both inevitable and an opportunity. As a result, the border, better than other parts of either the United States or Mexico, is poised to accept diversity and to profit from increased commerce.
Border residents have a common stake in movement and are damaged when it is impaired. When relationships sour between Washington and Mexico City, it is border residents who are affected most directly because it is they who are delayed at border crossings. Consequently, border residents, more than others, have a stake in good transnational relations and in decisions that are mutually beneficial to both nations.
Border culture provides a further unifying influence. The U.S.-Mexico border region combines the characteristics of Anglo, indigenous, and Latin cultures in a mix that is dissimilar to and richer than that of either country alone. The special character of the border is recognized by others in both nations. El Norte is believed to be practically a different nation by many Mexicans, and U.S. citizens visiting border towns often describe them as more Mexican than American. Bolstering and transmitting this separate culture are a number of border institutions, including cultural centers, historical societies, research organizations, newspapers and electronic media, and social organizations. An optimism, flexibility, and creativity often found on frontiers characterizes the border and leads to accommodation of interests.
The many cross-border families contribute to the border culture and are an inducement toward transnational integration. Kinship ties provide family members with many resources, and members and resources flow to areas of need and opportunity. The border thus is characterized by an empathy and other-directedness that favors common understanding between people in different nations. Border culture is supported, too, by the resulting bi- or multilingualism.
The integrated border economy is a powerful stimulus for agreement. Sonora and Arizona are linked economically. For instance, the economy of Nogales, Arizona, and Santa Cruz County depends heavily upon purchases made by Mexican visitors. Schools, hospitals, and other infrastructure are supported by sales taxes paid by Mexicans, and many residents are employed in enterprises that in one way or another relate to Mexico. A large share of the winter fruits and vegetables consumed in the United States comes north through the Port of Nogales. The customers of many of the fisheries in the Gulf of California are in the United States.
The links between border residents are matched by networks that have grown up between professionals and public officials to deal with shared problems. Medical personnel exchange information, equipment, and patients. Local police have informal arrangements to cooperate in the pursuit of suspects and crime prevention. Firefighters and rescue teams observe the professional ethic of going where they are needed, even if that means ignoring the border from time to time. Local public health officials maintain informal contact and often are able to share information that is unavailable through official channels. Border ties such as these have remained strong over time and new links have been forged as new needs have arisen.
A growing number of regional nongovernmental organizations, such as Pro Natura and the Border Ecology Project, operate on both sides of the border. Binational groups such as the Arizona-Mexico Commission, whose members are appointed by the governors of Arizona and Sonora, have intensified their activities. The Border Trade Association, initiated by U.S. and Mexican business people, exemplifies the channels of cooperation that are expanding as trade relationships intensify. And key individuals and organizations, such as the International Sonoran Desert Alliance (ISDA), have been able to forge even broader informal alliances (see "La Frontera Nueva: Building Transborder Cohesion in the Sonoran Desert" in this issue for details on ISDA's activities and methods).
While binational ties are strong, the stresses to which they are being subjected are increasing. The population boom and accompanying environmental degradation have introduced problems that are difficult to handle through informal, face-to-face means.
The Need for New Institutions To Support Transnational Linkages
New institutional arrangements to fully recognize the shared, transnational environment are much overdue. The North American Free Trade Agreement recognizes this need. To achieve the necessary reforms, national governments will have to relinquish some of their sovereignty to new institutions that can take a transnational perspective. However, transnational linkages that permit national agencies to speak to each other but remain deaf to local interests, particularly those on the border, are doomed to fail. The need for a bottom-up approach is especially critical when it comes to border areas. Border regions need to be considered as coherent entities in their own right. When viewed as centers of concern rather than peripheries, possibilities for bargaining and accommodation across borders emerge. New regional and local institutions with transborder jurisdiction need to be established and given the mandated authority to collect and disseminate data, to plan, and to apply for and dispense funding to recently created environmentally beneficial projects and programs. NAFTA included one such institutional innovation, the Border Environmental Cooperation Commission, which has jurisdiction over environmental infrastructure. Others need to be created.
Borders and border problems, many of them related to transborder water resources, are multiplying everywhere. While not often cited for the good example it provides, the U.S.-Mexico border has many lessons to teach the world. Despite great economic and cultural disparities, the two nations have been able since 1848 to resolve most of their differences peacefully. Further, a unique border culture has evolved, one that displays many of the best characteristics of both nations. Resolution of transborder environmental problems, however, has been neither as effective nor as sensitive to actual physical and social conditions as necessary.
Both the United States and Mexico are prepared, more than they ever have been, to work together to solve problems. The limitations of hierarchical institutions that centralize power and authority at national or international levels are increasingly obvious. At a time when officials and the public are receptive to the idea of reinventing government, structures need to be designed to engage border residents and to reinforce longstanding border linkages in resolving problems.
Helen Ingram is Director and Robert Varady is Associate Director of The Udall Center for Studies in Public Policy at The University of Arizona.
About the Arid Lands Newsletter
Tips and Tricks
From Scribus Documentation Project
Creating a Tiled Image
This tip shows how a collection of shapes can be used to create an irregular image frame. The techniques used are duplicate/multiple duplicate of shapes, combining them as polygons and then converting to an image frame.
• Create a rectangular shape. In the example, it has a width and a height of 16 mm. Of course, any other type of shape can be used.
• With the menu command Item > Multiple Duplicate, create 7 duplicates with a horizontal shift of 18 mm and a vertical shift of 0 mm.
• Group the resulting 8 rectangles.
• Multiply the resulting group again 5 times, with a horizontal shift of 0 and a vertical shift of 18 mm. This results in a two-dimensional group of rectangles, looking like a fence.
• At your option, ungroup the first row and change the position of the first rectangle. Resize it, rotate it, or whatever you like.
• Now, select all the rectangles and combine the polygons with Item > Combine Polygons.
• Apply Item > Convert to > Image Frame.
• Last, but not least, load an image into your new image frame.
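For readers who prefer arithmetic to menus, the geometry of the Multiple Duplicate steps above can be sketched in plain Python. This is only the position math, not the Scribus scripting API; the 16 mm tile size and 18 mm shift come from the example, leaving a 2 mm gap between tiles.

```python
TILE = 16   # tile width/height in mm (from the example above)
PITCH = 18  # horizontal and vertical shift used in Multiple Duplicate

def grid_positions(cols=8, rows=6, pitch=PITCH):
    """Return the top-left (x, y) in mm of every tile, row by row."""
    return [(col * pitch, row * pitch)
            for row in range(rows) for col in range(cols)]

positions = grid_positions()
print(len(positions))   # 48 tiles: 8 columns x 6 rows
print(positions[0])     # (0, 0) - the original rectangle
print(positions[-1])    # (126, 90) - last tile: 7*18, 5*18
```

The gap between neighbouring tiles is simply `PITCH - TILE` (2 mm here); change the pitch to make the "fence" tighter or looser.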
Creating a Circled Diagram
• Create two round shapes, one of them a bit smaller than the other. Keep a copy of the smaller circle – you'll need it later for creating the text! To make an exact copy, select the circle and press Ctrl-C and then Ctrl-V; there are now two circles on top of each other, so move the copy aside. Hint: Holding down the Shift key as you draw keeps your circle from becoming an ellipse.
• Put one of the smaller circles in the middle of the bigger one. Now select both circles (you can do this by clicking and dragging a selection box around the two circles). Next, in the menu bar, use the command Item > Combine Polygons. This combines the two and creates a ring. Set the stroke color to “None” and the fill color to whatever you like.
• Now we build the arrows. Create a new triangular shape and use the same fill color you applied to the ring. Set the stroke color to white.
• Rotate the triangle, so that it points to one of the four directions we’ll need.
• Place the triangle at its correct position above the ring.
• To hide the triangle’s white stroke on its long side, where it sits above the ring, just draw a small filled rectangle with no stroke color on top of the triangle.
• Group the triangle and the rectangle, create three duplicates, rotate the duplicates so that they are pointing to the four directions and place them above the ring.
• Now we add the text, using the copy of the small circle you made, but consider that you may want to go to Properties > Shape > Edit Shape to slightly enlarge it, since what you actually want is your text on an imaginary rim slightly larger than the smaller circle. Convert it to a Bézier curve. Create a text frame with your text and use Item > Attach Text to Path. Move this text until it has its correct position on the ring. You may need to adjust font size and spacing until it fits perfectly, and you can also further edit the circle after you have applied the text. See the section Attach Text to Path if you need more help.
• The text on the path is still editable – create three copies of the first text-on-path item, use the Story Editor to edit their content and move and rotate until the texts are all in the desired place above the ring. Once you have completed this project, select them all, then Item > Group to then place the entire ensemble precisely where you want it.
• Now that you’ve gone through this process, remember that Scribus may offer more than one way to accomplish your end result. For example, you could have started with a single circle shape, made the fill color “None,” made a copy for your text on path work, then increased line width of the original to a large number, say 40 to 50 pts, then added the triangles. You could also have done all the text at once in a long string, adding spaces in between word groups as needed to get them spaced properly around the circle. Neither one of the methods is inherently any better, though one may seem easier for you.
Creating Borders
For certificates, images and similar things, complex borders are sometimes needed. This is a very simple trick that uses the duplicating functions for some shapes and – at your option – the combining of shapes. The results can be used in many ways.
• We start with the creation of a polygon or a shape which will be the basis for the border. In this case, it’s a polygon with four corners.
• Use Item > Multiple Duplicate and create 10 – or any other number of – duplicates with a horizontal shift that is slightly smaller than the width of the original shape. Vertical shift is 0.
• At your option, use Item > Combine Polygons.
• Duplicate the resulting object, rotate it and build a border for an image.
• With some white circles, combined with a paper-colored rectangle, you can build a memo.
• By the way, this seems to be the only way to build a dotted line from circles in Scribus:
Text Over Images
When placing a text frame over an image, a common problem is that the color of the text and the underlying image interfere. Here are two simple tricks to avoid this.
Create a rectangular shape with approximately the same size as your text frame, fill it with white or another color, set its opacity to 30% or another value smaller than 100% and put it under your text, but above the image:
Create a text frame for your text and convert it to outlines by using Item > Convert to Outlines. Move it over a shape, for example a rectangle. Select both objects and use Item > Combine Polygons. If you now move the new object over an image, the image can be seen through the text:
The “wave” has been produced in a similar way: I started with a rectangular shape. By using Edit Shapes > Move Control Points in the Shape tab of the Properties Palette, the points of the shape can be rearranged and changed to curves:
In this case, a polygon with 7 corners and an applied factor of -45 was used to create a star. As described above, the price is “embossed” into the star and a duplicate of the result is used as a shadow:
Filling Text With an Image
Please note that this effect may need a lot of computing power when it’s applied to a longer text!
• Create a text frame and type your text, or load it from a file. Use a strong and bold font.
• Convert the text frame to outlines with Item > Convert to > Outlines.
• Use Item > Combine Polygons.
• Change the new polygon to an image frame via Item > Convert to > Image Frame.
• Load an image into the new shape.
• At your option, play around with the fill and line color. In this case both are set to “none”:
In a similar way, you can create “letters consisting of letters.”
• Create a text frame and type one letter in it. Use a strong and bold font. Increase the size of that letter to a high value.
• Convert the text frame with the letter inside to outlines.
• Now, convert it back to a text frame.
• Double-click it or use the Story Editor to add lots of the same letter to that text frame:
Custom Frames
At first glance, frames in Scribus are always rectangular. But that’s not true: a frame in Scribus can have an arbitrary form. There are two ways to obtain a non-rectangular frame in Scribus:
• Create a (rectangular) frame in Scribus and edit it as described in Working with Frames.
• Create the outer form of the frame in an external application like Inkscape and import it into Scribus as a vector graphic via File > Import (SVG, PostScript (PS) or Encapsulated PostScript (EPS) are suitable for this purpose). Then use the menu Item > Convert to, and choose either image frame or text frame.
Now you can handle the new text or image frame in the same way as you do it with other frames:
The vector graphic you import into Scribus should contain only one path. If there is more than one path in the file, Scribus will convert only the first one.
A Rising Sun Text on Path
Here is an example of an interesting effect combining “Attach Text to Path” with other graphics.
Start out with a semi-circle – here, one of the shapes is used, but of course with Scribus there are many ways to get this result. Make a copy of this since we’ll need it later – slide it off to the side. Take the original and in the context menu click Convert to > Bezier Curve.
Now for the rays for our sun, we’ll use the inverted question mark: ¿
Make a text frame, then enter about 15 or so of those inverted question marks – find them in Insert > Glyph, or press F12, then type “00bf” (without quotes). Now you have your text and your Bezier curve; select both, then use Item > Attach Text to Path.
What you will find is that your question marks follow your path, but may not start at your sun’s horizon. You will also see that your sun has disappeared, and checking “Show Path” in the Shape tab of the Properties Palette won’t make it reappear. This is where the copy of your semi-circle can now be slid into place. In Properties: Shape > Start Offset, adjust to get your characters above the horizon.
You will probably need to adjust your font size – this example used Nimbus Roman Bold, 20pt – and perhaps kerning. Finally, make your background sky using a frame with a blue background.
This final example adds a radial fill gradient to our sun – and of course you could use a gradient for the sky as well.
Scribus doesn’t support footnotes yet, and unfortunately, loading texts with foot- or endnotes into Scribus doesn’t work well with word-processor files. In older Scribus versions, the text ends after the last foot-/endnote, and the notes themselves appear in the text where the foot-/endnote mark is placed. The situation has since improved, but now all footnotes are stripped during import, so one has to save and load them separately.
There is, however, an easy workaround for the problem. Write your text in your word processor and save it as an HTML file. Then import the HTML file into Scribus. All text is preserved, including foot-/endnotes, which are placed at the end of the text.
Note that you still have to place your footnotes manually if the text or the text frames change, but at least all of them are imported correctly and placed as separate items.
ST elevation
From Wikipedia, the free encyclopedia
ST elevation refers to a finding on an electrocardiogram wherein the trace in the ST segment is abnormally high above the isoelectric line.
An ST elevation is considered significant if the vertical distance between the ECG trace and the isoelectric line at a point 0.04 seconds after the J-point is at least 0.1 mV (usually representing 1 mm or 1 small square) in a limb lead or 0.2 mV (2 mm or 2 small squares) in a precordial lead.[1] The isoelectric line is either the PR interval or the TP interval.[2] This measure has a false positive rate of 15-20% (which is slightly higher in women than men) and a false negative rate of 20-30%.[3]
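The significance criterion above reduces to a simple lead-dependent threshold check. The sketch below is illustrative only, not a clinical tool; the function name and the idea of passing a pre-measured elevation value (trace minus isoelectric baseline, 0.04 s after the J-point) are assumptions, and real ECG interpretation involves far more than this comparison.

```python
# Thresholds in mV, per the text: 0.1 mV (1 small square) for limb leads,
# 0.2 mV (2 small squares) for precordial leads.
LIMB_THRESHOLD_MV = 0.1
PRECORDIAL_THRESHOLD_MV = 0.2

def significant_st_elevation(elevation_mv, lead_type):
    """elevation_mv: trace height above the isoelectric line,
    measured 0.04 s after the J-point."""
    threshold = (PRECORDIAL_THRESHOLD_MV if lead_type == "precordial"
                 else LIMB_THRESHOLD_MV)
    return elevation_mv >= threshold

print(significant_st_elevation(0.15, "limb"))        # True
print(significant_st_elevation(0.15, "precordial"))  # False
```

Note the 15–20% false positive and 20–30% false negative rates quoted above: even a correctly applied threshold is only one piece of evidence.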
The ST segment corresponds to a period of ventricular systolic depolarization, when the cardiac muscle is contracted. Subsequent relaxation occurs during the diastolic repolarization phase. The normal course of the ST segment reflects a certain sequence of muscular layers undergoing repolarization and a certain timing of this activity. When the cardiac muscle is damaged or undergoes a pathological process (e.g. inflammation), its contractile and electrical properties change. Usually, this leads to early repolarization, or premature ending of systole.
Associated conditions
It can be associated with:
1. Family Practice Notebook > ST Elevation. Retrieved Nov 2010.
4. Thaler, Malcolm (2009). The Only EKG Book You'll Ever Need. Lippincott Williams & Wilkins. ISBN 9781605471402.
5. Tingle LE, Molina D, Calvert CW (November 2007). "Acute pericarditis". Am Fam Physician 76 (10): 1509–14. PMID 18052017.
6. Chew HC, Lim SH (November 2005). "Electrocardiographical case. ST elevation: is this an infarct? Pericarditis". Singapore Med J 46 (11): 656–60. PMID 16228101.
About Dialysis
What is dialysis?
When kidneys fail, they stop filtering the blood in your body. This is dangerous to your health because toxins and waste products build up in the bloodstream, which can become life-threatening.
Dialysis treatment replaces the function of the kidneys, which normally serve as the body’s natural filtration system. Through the use of a blood filter and a chemical solution known as dialysate, dialysis treatment removes waste products and excess fluids from the bloodstream.
There are two types of dialysis treatment: hemodialysis and peritoneal dialysis. In peritoneal dialysis (PD), the filter is the lining of the abdomen, called the peritoneum. In hemodialysis (HD), the filter is a plastic tube filled with millions of hollow fibers, called a dialyzer.
Treatment for hemodialysis typically takes place in an outpatient hemodialysis unit, although hemodialysis may also be administered at home under the right conditions. Patients generally go to the dialysis unit three times a week for treatment. The schedule generally is either Monday, Wednesday, and Friday or Tuesday, Thursday, and Saturday.
Before treatment, patients weigh themselves so that excess fluid accumulated since the last dialysis session can be measured. Patients receive treatment on special dialysis chairs similar to recliners. Treatments generally last from three to five hours.
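The weigh-in arithmetic can be sketched as follows. The idea of comparing against a target "dry weight" (the patient's weight without excess fluid) and the approximation that 1 kg of retained water corresponds to about 1 L of fluid are standard notions not stated in the text above, so treat this as an illustration only.

```python
def fluid_to_remove_liters(current_kg, dry_weight_kg):
    """Estimated excess fluid since the last session, assuming the
    weight gained over the target dry weight is retained fluid
    (1 kg of water is approximately 1 L)."""
    return max(0.0, current_kg - dry_weight_kg)

# A patient weighing in at 82.5 kg against an 80.0 kg dry weight:
print(fluid_to_remove_liters(82.5, 80.0))  # 2.5 (liters)
```

In practice the care team sets the actual fluid-removal target, since removing fluid too quickly is itself a risk.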
Peritoneal Dialysis
Peritoneal dialysis can be conducted in the patient’s home or in a location which is clean and sanitary.
In peritoneal dialysis, the patient’s peritoneum, or lining of the abdomen, acts as a blood filter. A catheter is surgically inserted into the patient’s abdomen. During treatment, the catheter is used to fill the abdominal cavity with dialysate. Waste products and excess fluids move from the patient’s bloodstream into the dialysate solution. After a waiting period of six to 24 hours, depending on the treatment method used, the waste-filled dialysate is drained from the abdomen and replaced with clean dialysate.
SSM Cardinal Glennon Children's Medical Center
(314) 577-5600
What Is Paronychia?
Paronychia (pronounced: par-uh-nik-ee-uh) is an infection of the skin around a fingernail or toenail. The infected area can become swollen, red, and painful. Sometimes a pus-filled blister may form.
Most of the time, paronychia is no big deal and can be treated at home. In rare cases, the infection can spread to the rest of the finger or toe. When that happens, it can lead to bigger problems that may need a doctor's help.
You're not likely to get paronychia in a toe (unless you have an ingrown toenail). But fingernail paronychia is one of the most common hand infections there is.
What Causes Paronychia?
Common paronychia causes include:
• biting or pulling off a hangnail
• clipping a nail too short
• cutting or pushing back the skin around the sides and bottom of the nail (the cuticle)
Some people get paronychia infections after a manicure or using chemicals like nail glue. Certain health conditions (like diabetes) also can make paronychia more likely. And if your hands are in water a lot (if you wash dishes at a restaurant, for example), that ups the chances of getting paronychia.
What Are the Signs of Paronychia?
Chances are, if you have paronychia, it will be easy to recognize. There will be an area of skin around a nail that is painful and tender when you touch it. The area probably will be red and swollen and feel warm. You may see a pus-filled blister.
What Should You Do?
If paronychia doesn't get better after a week or so, call your doctor. You'll want to call a doctor right away if you have an abscess (a pus-filled area in the skin or under the nail) or if it looks like the infection has spread beyond the area of the nail.
If paronychia becomes severe and you don't see a doctor, infection can spread through the finger or toe and move into the rest of the body. Luckily, this is very rare.
What Do Doctors Do?
Usually, a doctor or nurse practitioner will be able to diagnose paronychia just by examining the infected area. In some cases, a doctor may take a pus sample to be examined in a laboratory to determine what type of germ is causing the infection.
If you have diabetes, let your doctor know if you notice any signs of paronychia, even if it seems mild.
Don't try to puncture or cut into an abscess yourself. Doing that can lead to a more serious infection or other complications. The doctor may need to drain the abscess and possibly prescribe antibiotic medications to treat the infection. Once an abscess is treated, the finger or toe almost always heals very quickly.
Can Paronychia Be Prevented?
Here are some things that can lessen your chances of developing paronychia:
• Don't bite your nails or pick at the cuticle area around them.
• Don't cut nails too short. Trim your fingernails and toenails with clippers or manicure scissors, and smooth the sharp corners with an emery board or nail file. The best time to do this is after a bath or shower, when your nails are softer.
• If you have diabetes, make sure it is under control.
• Practice good hygiene: keep your hands and feet clean and dry.
• If you get manicures or pedicures at a nail salon, consider bringing along your own clippers, nail files, and other tools.
As much as possible, try to avoid injuring your nails and the skin around them. Nails grow slowly. Any damage to them can last a long time.
Reviewed by: Elana Pearl Ben Joseph, MD
Date reviewed: March 2012
Reinventing the wheel -- naturally
Jun 14, 2010
This is a fanciful rendering of Leonardo da Vinci's Vitruvian Man as a wheel. Credit: Adrian Bejan
Adrian Bejan, professor of mechanical engineering at Duke's Pratt School of Engineering, argues that just as the design of wheels became lighter with fewer spokes over time, and better at distributing the stresses of hitting the ground, animals have evolved as well to move better on Earth. In essence, over millions of years, animals such as humans developed the fewest "spokes," or legs, as the most efficient method for carrying an increasing body weight and height more easily.
"This prediction of how wheels should emerge in time is confirmed by the evolution of wheel technology," Bejan said. "For example, during the development of the carriage, solid disks were slowly replaced by wheels with tens of spokes."
The advantage of spokes is that they distribute stresses uniformly while being lighter and stronger than a solid wheel. "In contrast with the spoke, the solid wheel of stone was stressed unevenly, with a high concentration of stresses near the contact with the ground, and zero stresses on the upper side," Bejan said. "The wheel was large and heavy, and most of its volume did not support the load that the vehicle posed on the axle."
"If you view animal movement as a 'rolling' body, two legs, swinging back and forth, perform the same function of an entire wheel-rim assembly," Bejan said. "They also do it most efficiently - like one wheel with two spokes with the stresses flowing unobstructed and uniformly through each spoke. The animal body is both wheel and vehicle for horizontal movement."
Bejan's analysis was published early online in the American Journal of Physics. His research is supported by the National Science Foundation and the Air Force Office of Scientific Research.
"An animal leg is shaped like a column because it facilitates the flow of stresses between two points - like the foot and hip joint, or paw and shoulder," Bejan said. "In the example of the Neolithic stone wheel, the flow of stresses is between the ground and the whole wheel."
Bejan believes that the constructal theory of design in nature, which he started describing in 1996, predicts these changes in the wheel and in animal movement. The theory states that for a design (an animal, a river basin) to persist in time, it must evolve to move more freely through its environment.
Since animal locomotion is basically a falling-forward process, Bejan argues that an increase in height predicts an increase in speed. For a centipede, each leg represents a point of contact with the ground, which limits the upward movement of the animal. As animals have fewer contacts with the ground, they can rise up higher with each stride.
"The constructal theory shows us this forward-falling movement is dictated by the natural phenomenon, which is required for the minimal amount of effort expended for a certain distance traveled," Bejan said.
An earlier analysis by Bejan showed that larger human swimmers are faster because the wave they create while swimming is larger and thus carries them forward faster.
While wheel-like movement evolved naturally, it also describes what Bejan likes to call "nature's gear box." Humans have two basic speeds, Bejan said - walking and running. A running human gets taller, or higher off the ground, with each stride, which increases his speed.
A horse has three speeds - walk, trot and gallop.
"The horse increases its speed by increasing the height from which it falls during each cycle," Bejan said. "Then, from the trot to the gallop, the body movement changes abruptly such that the height of jump increases stepwise for each stride. Nature developed not only wheel-like movement but also mechanisms for changing speeds."
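The falling-forward argument can be made concrete with the elementary free-fall relation v = sqrt(2gh): if each stride lets the body fall from a greater height, the speed gained grows with the square root of that height. The sketch below uses made-up per-stride fall heights for illustration; they are not Bejan's figures.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_speed(height_m):
    """Speed gained by falling freely from height_m: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * height_m)

# Illustrative (invented) per-stride fall heights for two human "gears":
for gait, h in [("walk", 0.02), ("run", 0.08)]:
    print(f"{gait}: falls {h} m per stride, gains {fall_speed(h):.2f} m/s")
```

Quadrupling the fall height only doubles the recovered speed, which is consistent with the text's point that animals change gears (walk, trot, gallop) in discrete steps rather than scaling one gait indefinitely.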
Jun 14, 2010
Actually the paper is out and not "early online"; see:
"The constructal-law origin of the wheel, size, and skeleton in animal design," American Journal of Physics, July 2010, Volume 78, Issue 7, pp. 692–699.
Jun 14, 2010
Just curious, but for the life of me I can't find one example of stone wheels being used as transportation devices. I see many examples of use for grinding grain and such, but have yet to actually find an honest-to-goodness stone wheel cart. At least I can't on the interwebs. Does anyone know of any actual examples of stone wheels being used as cart wheels? I am also operating under the assumption that our ancestors were just as capable as us of innovation. I can see a prototype wheel of stone, but after one test run you'd think someone would decide they loved the shape, but hated the weight, and reproduced it out of wood. Any thoughts?
Math You Need > Hypsometric Curve > Instructor Page
Guiding students through the Hypsometric Curve
An instructor's guide to Reading the Hypsometric Curve
by Dr. Eric M. Baer, Highline Community College Geology, and Dr. Jennifer M. Wenner, University of WI Oshkosh Geology.
What should the student get out of this module?
The hypsometric curve, from Marshak, "Earth - Portrait of the Planet," 2nd ed., p. 37.
After completing this module, a student should be able to:
Why is it hard for students?
The hypsometric curve is generally unlike any graph ever seen by students. Often, the plots in textbooks also have significant amounts of extraneous data which can distract students. Finally, students often have never thought about distributions in any depth and so the graphical representation of a distribution is a novel concept that requires them to wrestle with numerous new ideas.
Moreover, because reading the graph can sometimes be a challenge, students lose sight of the significance of the graph: that the elevation of the Earth's surface is bimodally distributed, prima facie evidence that there are two types of crust on the Earth.
What have we left out?
This page does not include a histogram of the distribution of topography of the Earth. In many textbooks the histogram is smoothed, which is an invalid way of graphically representing a distribution, even if it may be more intuitive.
This page does not explicitly teach the significance of slope on a cumulative percent graph, even though it is the slope that many geologists look at. This is because I have found that students eventually come to understand the importance of slope after they have interacted with cumulative percent graphs for a while, and that teaching the importance of slope on one of these graphs before they have mastered extracting data from a cumulative percent graph is overwhelming.
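A cumulative percent curve of this kind is straightforward to compute: for each elevation, find the fraction of the surface lying at or above it. The sketch below uses a handful of invented sample elevations, not real hypsometric data, so the bimodal clustering (deep ocean floor vs. low-lying continents) is only hinted at.

```python
def cumulative_percent_above(samples, elevation):
    """Percent of samples with value >= elevation (hypsometric-style)."""
    return 100.0 * sum(1 for s in samples if s >= elevation) / len(samples)

# Toy elevations in meters: a deep-ocean cluster and a continental cluster.
elevations = [-4000, -3800, -3500, -200, 100, 400, 800, 5000]

print(cumulative_percent_above(elevations, 0))      # 50.0 - half above sea level
print(cumulative_percent_above(elevations, -5000))  # 100.0 - everything
```

Plotting elevation against this percentage for many thresholds reproduces the curve students see in the textbook; the steep segments of that plot correspond to the two modes of the distribution.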
Instructor resources
The Cumulative percent graphs: Teaching the Hypsometric Curve/Graph and other similar plots page in the Teaching Quantitative Literacy in the Geosciences website is a terrific place to look for ideas on teaching the hypsometric curve and other cumulative percent graphs.
Corollary discharge, I-function, personal responsibility
Biology 202
Neurobiology and Behavior
Spring 2008
Course Notes
Class discussion - 1 April
If there is conflicting information presented to a central pattern generator or other systems of organization within the brain, perhaps it is the CDS that makes the decision towards resolution of this conflict. Through repeated decision-making opportunities, a new central pattern generator might be created through self-reinforcement. For example, when thinking about emotions, if there is something that makes you anxious, after repeated exposure to this situation, a new central pattern generator will be created (in coalescence with a CDS).
Further questions: What is the original purpose of CDS? Is the I-function a mediator in this process (or resolution and conflict)? With talk therapy, does that change the CPG or does it create new ones?
Group Members: Emily Alspector, Paul Bloch, Kendra Smythe, Sophie Feller, Isabelle Winer, Maggie Simon, Michelle Khilji, EB Ver Hoeve, Angel Desai
What is the I-function not needed for?
One definition of the I-function is that it is a part of the nervous system which controls awareness and consciousness. It was thought that the I-function is the "higher power" of the nervous system that conducts most of the actions produced by the body. However, we were surprised to discover that this is not true: much of what the I-function was believed to control is actually not affected by it. Some of these behaviors thought to be controlled by the I-function include decision making (choice), motor behavior, learning, and the perception of others and ourselves.
Group members: Rica Dela Cruz, Lyndsey Carbonello, Caitlin Jeschke, Anna Giarratana, Ashley Savannah, Simone Shane, Tara Raju, Jackie Marano, Jessica Varney, Madina Ghazanfar, Margaux Kearney, Anne Kauth, Paige Safyer, Nelly Khaseler, Mahvish Qureshu
Can personal responsibility coexist with neuroscience?
We feel that although maybe not reflected in the nervous system, personal responsibility can (sometimes) co-exist with neuroscience. Perhaps in guiding larger, “conscious decisions,” the I-function opens toward the possibility of personal culpability. However, it does seem that advances in the study of how brains “brain” ever increasingly encroaches on the realm of free-will. Regardless of the outcome, would such a finding (that determinism holds sway), given that our cultural evolution produced such an influential and pervasive concept, have any real bearing on how and why we behave?
Group members: Michelle Crepeau, Evan Stiegel, Jen Benson, Meredith Tuohey, Zoe Fuller-Young, Caroline Feldman
What will I do differently next time?
Think of the central pattern generator as capable not only of generating motor patterns but patterns generally, including "templates" against which incoming sensory signals can be compared. Patterns within the nervous system that communicate what is going on in one box to another box (including templates) are CDs. What needs to be made clearer is the distinction between the concepts as empirically well-defined tools, and the potential relevance of those tools in additional cases where they have not necessarily been well-defined empirically. Particularly interesting along these lines are the questions of where "comparisons" are occurring and how comparisons alter behavior both immediately and in the longer run. Do they, in the longer run, alter pattern generation, CD, and/or the act of "comparison" itself? Is the comparator a distinct entity/tool?
Can the I-function be conceived using the same tools, or does it involve some additional ones? Ditto "personal responsibility".
Paul Grobstein
Representatives from Georgia Tech Research Institute and the Maneuver Battle Lab prepare to test an unmanned aerial system last week at McKenna MOUT.
FORT BENNING, Ga. (June 12, 2013) -- Last week, the Georgia Tech Research Institute and the Maneuver Battle Lab conducted multiple tests of unmanned aerial systems at the McKenna Military Operations in Urban Terrain site.
A variety of tests were performed, all focused on improving the systems' ability to be autonomous and to work in collaboration with other UAS.
The aircraft used for the tests were built by GTRI using off-the-shelf airframes and autopilot systems.
GTRI later added an autonomous mission payload computer to enable the aircraft to run its autonomous behaviors and algorithms.
Because of that, a representative from the Maneuver Battle Lab said the real interest for the Maneuver Center of Excellence was not the aircraft, but the algorithms.
"Our interest is what's inside the aircrafts, and the algorithms they're testing to enable that autonomous flight," said Harry Lubin, chief of the experimentation branch of the MBL.
During the week, the aircraft demonstrated several capabilities.
One was the ability to perform squad formations without human input or interaction. Once an aircraft is designated as the leader, other aircraft can communicate and arrange themselves into a wingman formation autonomously.
Aircraft also underwent testing in which they were given a number of checkpoints to visit, and were allowed to calculate which specific aircraft should visit each checkpoint.
Initial testing has also begun on capabilities that would allow aircraft to observe one another and learn which UAS is best suited for future tasks.
"What we'd like to see and we envision in the future is a situation where these aircraft have a higher degree of autonomy, and you can task them with high-level missions," said Charles Pippin, a senior GTRI scientist. "From there, the aircraft would be able to negotiate amongst themselves to determine which aircraft is most suited for each task."
In addition to the military applications of the systems, Pippin said there could also be applications for the civilian sector, namely in emergency situations.
"In a disaster scenario, such as after Hurricane Katrina, if there were multiple UAS assets available, you could get them to quickly work together and perform autonomous missions to search for survivors," he said.
The aircraft are able to communicate using point-to-point technology, meaning transmissions are sent directly, rather than flowing through a centralized ground station.
This allows one Soldier to be assigned to multiple aircraft, rather than requiring each UAS to have an operator.
"An eventual goal of this type of research is to have humans in the loop, but very minimally and in a supervisory role rather than in a low-level operations role," Pippin said. "We want to remove the load from the human beings and allow them to supervise the aircraft. When the aircraft need assistance, they can request that from the humans, and that's where we want to move things."
Lubin said allowing the UAS to fly autonomously would help lessen the strain on the combat force.
"The current capabilities we have fielded come with a price," Lubin said. "They provide a great capability for reconnaissance, but we have to pull our Soldiers from the units to control these unmanned systems. The more unmanned systems we have, the more of our combat power we deplete. So, the more we can get these systems operating on their own, without having a direct controller and operating collaboratively, the more that will increase our combat power."
While GTRI has made strides in autonomous UAS research, Pippin said the technology is still in development.
"This is very much a research-in-progress project," Pippin said. "We've been working with this particular project for about three years now. … As we perform more and more tests, we shake the bugs out of our technology. That's why we're here this week; to find the problems and weak links and continue to improve the technology."
The autonomous UAS research is a GTRI initiative, with the MBL in a supporting role.
However, Lubin said the research is helping the maneuver force to understand the capabilities of unmanned systems, something that is becoming more and more important.
"Normally, we have the advantage as far as unmanned systems go, but the battlefield is now becoming proliferated with other unmanned systems," Lubin said. "We've got to look at addressing that not only in our doctrine on how we deal with unmanned systems, but also in our training, because it's been a long time since the U.S. Army has had to look up. Normally, we have air superiority wherever we go. But, at the tactical level, we can no longer make the assumption that all unmanned systems are friendly."
The partnership also has benefits for GTRI, as Fort Benning is able to provide a testing facility that is not available in Atlanta.
"Specifically, the range has a lot of benefits for us," Pippin said.
"We have the ability to fly multiple systems simultaneously, whereas we're not able to do that due to FAA regulations outside of Fort Benning."
Page last updated Wed June 12th, 2013 at 00:00
|
Published Online: August 25, 2014
Idaho students to take new test this school year
Here are five things you should know about the new test and the standards.
What is Common Core?
State lawmakers adopted the Idaho Core Standards in 2011, but there has been growing opposition calling for reconsideration, even repeal, during the past three years.
There's been concern that the standards had to be approved and regulated by the federal government, but state officials have repeatedly said the new measures have never been subjected to a federal review. Instead, it's up to the local school districts to adopt curriculum to meet the Idaho standards in math and English language arts.
What's different this year?
Idaho school districts began teaching the new standards last school year. At the same time, students took a field-test version of the Idaho Common Core-aligned examination, developed by the Smarter Balanced Assessment Consortium (SBAC). The practice tests were given to provide educators, parents and students experience with the assessment.
In spring of 2015, students will be given tests in which the results count.
What is SBAC?
SBAC is an adaptive test, meaning the questions will differ between students. Those who answer initial questions correctly will be given tougher questions, while students who do a poor job of answering the initial questions will be given easier questions. The test is supposed to be given online, but paper tests will be made available for the first three years for schools without Internet or enough computers. Finally, there is no time limit on the test for students, but schools have a month to finish administering the assessment.
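The adapt-up/adapt-down mechanism described above can be sketched in a few lines. This is only an illustration of the idea; the actual SBAC engine is a far more sophisticated psychometric system, and the difficulty scale and step sizes here are invented.

```python
# Minimal sketch of an adaptive test: answer correctly and the next
# question gets harder, answer wrong and it gets easier. Illustration
# only -- not how the real SBAC selects items.

def next_difficulty(current, answered_correctly, lo=1, hi=10):
    """Step difficulty up after a correct answer, down after a miss,
    clamped to the available difficulty range."""
    step = 1 if answered_correctly else -1
    return max(lo, min(hi, current + step))

# A student who gets the first three questions right, then misses two:
difficulty = 5
history = []
for correct in [True, True, True, False, False]:
    difficulty = next_difficulty(difficulty, correct)
    history.append(difficulty)

print(history)  # [6, 7, 8, 7, 6]
```

The practical upshot for parents is in the sketch: two students sitting side by side will see different questions, because each one's answers steer the difficulty of what comes next.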
The SBAC replaces the Idaho Standard Achievement Test (ISAT). If all this sounds like alphabet soup, it gets slightly more complicated. State officials are now referring to the SBAC as the ISAT 2.0.
Will my child take the test?
Probably. If your child is in grades 3 through 10 and enrolled in a public school, then she or he will most likely be given the test.
There is a possibility that whoever wins the state superintendent seat in the November election could attempt to delay the implementation of the test. But so far, none of the candidates have explicitly said they would do so.
Does this even matter? Aren't the Idaho Common Core standards going to be repealed?
So far, efforts to repeal Idaho's standards have failed. In the May GOP primary election, tea-party favorite candidates who supported repealing the standards didn't win the key statewide offices that would allow them to revoke the new measures. As Idaho gets closer to the November general election, both the Republican and Democratic candidates for state superintendent have said they will work to improve implementing the standards, but they have not said the standards must be repealed.
Idaho's Republican Party attempted to include a plank in its platform condemning the standards and calling for their repeal. However, political infighting at this year's Idaho GOP convention caused the event to end in chaos, and the plank was never considered.
|
Even Camels Aren't Safe From Global Warming
| Wed Mar. 14, 2007 6:57 PM EDT
Australia's current drought, the worst in a century, is driving its feral camels mad with thirst. The country's 1 million wild camels, the largest population in the world, are stampeding through Western Australian towns looking for water. "They did a lot of damage searching for water," a townswoman told Reuters, "trampling air conditioning hoses, taps, and pipes." Despite their efforts, thousands of the animals are being found dead along the dried-up banks of the Docker River.
The camels, which usually travel in groups of about 100 animals, were first introduced to Australia around 1840 to provide transportation through the dangerously hot and expansive deserts of Western and Central Australia. The several different breeds--slender riding camels from the Middle East, two-humped camels from China, and draft camels from India--were essential cargo vehicles for the country's many infrastructure projects. But by 1930, autos had replaced camels and the animals were left to fend for themselves.
--Jen Phillips
|
UniProt release 10.2
Published April 3, 2007
Spider dermonecrotic toxin family
Loxosceles is the genus of spiders that includes the infamous brown recluse spider Loxosceles reclusa. These spiders, also called violin spiders or fiddleback spiders because of violin-like marks on their cephalothorax, are brownish-yellow in color, and spin small, irregular webs under rocks, or in nooks and crannies of your house. These spiders are found in the USA, South America, Europe and Africa. Their most characteristic feature is actually their eyes: most spiders have eight eyes, but Loxosceles have six, arranged in three pairs, or dyads, that sit side-by-side.
The bite of a Loxosceles spider is not deadly, but it is very unpleasant - the venom is necrotoxic, causing tissue to die and fall off. Pain usually doesn't begin until 6-12 hours after the bite occurs. Loxosceles' necrotoxic venom is cytotoxic and hemolytic. It contains at least 8 enzymes; the enzyme thought to be responsible for most of the destructive effects is called Sphingomyelinase D. This enzyme catalyzes the hydrolysis of sphingomyelin and causes hemolysis and dermonecrosis.
The annotation of this toxin family has just been updated in UniProtKB/Swiss-Prot (e.g. Q8I914 and P83045).
UniProtKB News
Cross-references to BuruList
Cross-references have been added to the Mycobacterium ulcerans genome database. This database is dedicated to the analysis of the genome of Mycobacterium ulcerans, the Buruli ulcer bacillus: BuruList. BuruList provides a complete dataset of DNA and protein sequences derived from the epidemic strain Agy99, linked to the relevant annotations and functional assignments. It allows one to easily browse through these data and retrieve information, using various criteria (gene names, location, keywords, etc.).
The Mycobacterium ulcerans genome database is available at
The format of the explicit links in the flat file is:
Resource abbreviation: BuruList
Resource identifier: Ordered locus name
Example: DR   BuruList; MUL_4631; -.
Changes concerning keywords
New keyword:
|
Neuron Connection
Supported by NSF DUE-0231019 and DEB-0336919
INTRODUCTION to Parkinson’s Disease:
Parkinson’s disease (PD) is a neurodegenerative movement disorder that afflicts over half a million people in the United States, most of them senior citizens. Each year, about 50,000 new cases are diagnosed; and this may under-represent PD prevalence since symptoms develop so gradually. Victims progressively deteriorate in terms of coordination and movement, and may experience tremors, mental deterioration and dementia, muscle rigidity, and loss of the ability to walk, speak, or perform other activities. The disorder results from the loss of dopaminergic neurons, although the exact cause for this deterioration is as of yet undetermined. There are several methods of treatment for PD, but there is currently no cure (Isacson, 1996; Rajput et al., 1984).
Animal models of PD allow researchers to study the pathology of PD while tracking physical and behavioral changes during the entire disease course. Due to compensatory mechanisms, human PD symptoms usually do not show up until more than 80% of striatal dopamine (DA) levels are depleted, making it difficult to study the onset and earliest stages of the disease in humans (Hornykiewicz, 1993). Moreover, parkinsonian symptoms appear in several other human disorders, leading to a misdiagnosis rate of about 24% (Lang & Lozano, 1998). Animal models designed to simulate PD eliminate these problems, making them a useful tool for understanding and treating PD (Betarbet, Sherer, & Greenamyre, 2002; Deumens, Blokland, & Prickaerts, 2002; Beal, 2001; Zigmond & Stricker, 1984).
BACKGROUND of model:
Urban Ungerstedt developed the rat 6-OHDA unilateral-lesion model of Parkinson’s disease, which can be used to model hemi-parkinsonian symptoms, early stages of PD when one hemisphere deteriorates more rapidly than the other, and acute catecholaminergic neuron loss. It offers easily quantifiable behavior and a short time course, making it an efficient model for testing drugs that act on DAergic neurons and DA receptors. Rats with unilateral lesions to the substantia nigra (SN) rotate in response to apomorphine, d-amphetamine, and other dopaminergic receptor agonists. Several studies have found that the number of rotations correlates with DA denervation. Currently there is no method of predicting the amount of rotation that will occur with different amounts of damage, which makes it difficult for researchers to quantify the efficacy of experimental PD treatments. Thus our project sought to synthesize existing rotational data and determine a mathematical equation that fits the experimental observations, in order to more fully characterize the nature of the 6-OHDA model.
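The kind of equation the project sought would map percent denervation to rotational behavior. As a purely hypothetical illustration — every parameter and threshold below is invented, not a finding of this project — a logistic curve captures the qualitative observation that symptoms emerge only after roughly 80% dopamine depletion:

```python
import math

# Hypothetical sketch of a denervation-to-rotation curve: a logistic
# function that stays near zero until the ~80% compensation threshold,
# then rises sharply. Parameters are invented for illustration; they
# are not fitted values from the project's data.

def rotations(denervation_pct, ymax=12.0, midpoint=80.0, steepness=0.25):
    """Logistic model: rotation rate rises sharply past the midpoint."""
    return ymax / (1.0 + math.exp(-steepness * (denervation_pct - midpoint)))

# Little rotation below the compensation threshold, much more above it:
low = rotations(50)    # well below 80% denervation
high = rotations(95)   # well above it
print(round(low, 2), round(high, 2))  # 0.01 11.72
```

Fitting such a curve to real rotational data would mean estimating ymax, midpoint, and steepness by least squares; the sketch only shows why a threshold-shaped function is a natural candidate.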
SURGERY: You are going to perform a virtual experiment using the unilateral 6-OHDA model. Your rats will be given neurotoxic 6-OHDA injections to the substantia nigra of the right hemisphere. The toxin will selectively destroy dopaminergic neurons beginning within 24 hours, leading to Parkinson-like symptoms. You will select the amount of denervation you would like to induce, choose a drug with which to challenge your animals, and view graphs of the rats' rotational behavior both pre- and post-lesion. To proceed to the simulation, click here.
For a one-semester lab-based course in behavioral neuroscience, interactive 6-OHDA wetlab CDs are available for institutional purchase by contacting
|
Gram Flour
Gram flour is a powdery substance obtained by grinding chickpeas. It is also known as Chickpea flour, Garbanzo flour, Besan, and Beshon. It is a popular ingredient used for making sweets and savories in South East Asia, chiefly in India, Pakistan and Bangladesh. It has a high content of carbohydrate and protein. However, it has no gluten. Hence, it is widely used as a substitute for wheat flour in recipes for gluten-intolerant people.
Chickpeas appear to have been first cultivated around 6700 B.C. in southern France, and have also been found in and around Turkey during the Neolithic Age. By the Bronze Age, they were also in use in Italy, Greece, Rome and Germany.
Preferred Methods of Cooking
Gram flour can be kneaded into dough or combined with water or yoghurt to form a batter. The kneaded dough can be roasted, baked, grilled, steamed and fried to make a host of savory items like bhujia, gatta, farsan, roti, paratha and puri. The batter made from gram flour can also be baked, steamed, deep fried and shallow fried, to make various popular snacks like dhokla, khandvi, chilla, pakora and bhajia.
Cuisines and Popular Recipes
Gram flour is popularly used in Indian, Pakistani, Bangladeshi and some other Southeast Asian cuisines. The most popular dishes prepared are sweets like halwa, ladoo, saatu, bundiya and savories like chilla, dhokla, puri, roti, farsan, kachori, bonda, pakora, papad and so on. It is also mixed with yoghurt and water and cooked as a curry for various main course dishes. It is also used in Italian cuisine to prepare farinata, in Cadiz cuisine to make tortillitas de camarones and in French cuisine to make socca.
Nutritive Value
Gram flour provides 18%, 40% and 41% of the daily requirements of carbohydrate, fiber and protein, respectively. It is also a rich source of vitamin K and B-complex vitamins like thiamin, riboflavin, niacin and folate; along with minerals such as calcium, iron, sodium, zinc, copper, selenium, phosphorus, magnesium and potassium.
Buying and Storing
Gram flour can be purchased from grocery stores under the health food or Indian food section. It can also be ordered online or made at home. Approximately 1 lb. of Bengal Grams or chickpeas will produce 2 cups of gram flour. The dry kernels simply need to be ground in a food processor, either raw or roasted.
Gram flour is best stored in airtight containers and remains fresh for up to 3 weeks. However, refrigerating the airtight containers will extend the shelf life of gram flour up to 6 months.
Gram flour can be made from both raw as well as roasted grams which could be black grams or desi chana, green grams and white grams also known as kabuli chana.
Non-Food Uses
Gram flour is extensively used as a natural scrubber and exfoliant in combination with other ingredients like, egg, turmeric, etc.
• In 1793, roasted chickpeas were used as a substitute for roasted coffee beans in Germany.
|
Fraction Game
3-5, 6-8
Math Content:
Number and Operations
This applet allows students to individually practice working with relationships among fractions and ways of combining fractions. For a two person version of this applet see the Fraction Track E‑Example.
The object of the game is to get all of the markers to the right side of the game board, using as few cards as possible.
Click on the pile to turn over one card. This is your target fraction. Move the markers so that the sum of your moves is a fraction that is less than or equal to the target fraction.
For example, if the first card turned over is 4/5, you could move the fifths marker to 3/5 and the tenths marker to 2/10, because 3/5 + 2/10 = 3/5 + 1/5 = 4/5. These moves are shown below.
In addition, any of the following moves would also be acceptable:
• The fifths marker to 4/5.
• The tenths marker to 8/10, because 8/10 = 4/5.
• The thirds marker to 2/3, because 2/3 < 4/5.
• The fifths marker to 1/5 and the tenths marker to 6/10, because 1/5 + 6/10 = 1/5 + 3/5 = 4/5.
• The halves marker to 1/2, the sixths marker to 1/6, and the eighths marker to 1/8, because 1/2 + 1/6 + 1/8 = 12/24 + 4/24 + 3/24 = 19/24 < 4/5.
There are many other moves that would also be acceptable, as long as the sum of the moves is less than or equal to 4/5.
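The legality rule these examples illustrate — the moves must sum to a fraction less than or equal to the target card — is easy to express with exact rational arithmetic. Here is a small sketch using Python's standard fractions module (the function name is my own; this is not part of the applet).

```python
from fractions import Fraction

# Check whether a set of marker moves is legal for a target card:
# the moves must sum to a fraction less than or equal to the target.

def legal_move(target, moves):
    """target: a Fraction; moves: list of Fractions moved by each marker."""
    return sum(moves, Fraction(0)) <= target

target = Fraction(4, 5)

# The example from the text: 3/5 on the fifths track plus 2/10 on the
# tenths track sum to exactly 4/5.
print(legal_move(target, [Fraction(3, 5), Fraction(2, 10)]))   # True

# 1/2 + 1/6 + 1/8 = 19/24 < 4/5, so this is also legal.
print(legal_move(target, [Fraction(1, 2), Fraction(1, 6), Fraction(1, 8)]))  # True

# But 1/2 + 1/2 = 1 > 4/5 would not be.
print(legal_move(target, [Fraction(1, 2), Fraction(1, 2)]))    # False
```

Using exact fractions rather than floating point matters here: sums like 1/2 + 1/6 + 1/8 compare against the target without any rounding error.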
When you have finished your move(s), click on the pile to get a new card.
The Sound On button can be used to add some excitement!
Play the game several times.
1. Which are the best cards to get at the beginning of the game? That is, which ones are most helpful when you first start? What cards are better to get later in the game?
2. Should you move the markers for fractions with larger or smaller denominators first?
3. The deck contains every possible card with a denominator of 2, 3, 4, 5, 6, 8, or 10. Knowing this, what is the fewest number of cards needed to complete the game? Justify your answer.
|
A finless-variety 'Pembroke welsh pearlscale' goldfish dog.
Goldfish Dogs-- Genetic Abomination, or 'The Next Big Thing' in Exotic Animal Husbandry?
by Maranda Cromwell
Since humans emerged as an evolutionary powerhouse, harnessing agriculture and claiming ownership over the lands, they have also taken control of animals. I'm talking about domestication.
With the power we hold over the animals, we can control their fates. From wolves, we have made chihuahuas and maltese. From small wild cats in Africa we have created the common house cat. Fancy goldfish are no exception-- from the sleek and noble koi's ancestors to the 'fancy' goldfish: round and wiggly fish kept by aquarium enthusiasts all around the world.
But humans did not stop at wolves and koi.
In a frenzied quest to create the perfect companion, breeders and geneticists working together have made a breakthrough in the field of domesticated companion animals. The rationalization is clear: combine the loyalty and adaptability of dogs with the simple-mindedness and somehow cute appearance of fancy goldfish. At the time, perhaps that seemed like a good idea.
Like the fabled story of the Two Headed Cat, scientists quickly discovered their folly as the first goldfish dog was created.
With all the enthusiasm of a puppy and the relative brainlessness of a goldfish, the goldfish dogs proved to be many things, none of which was the scientists' and breeders' idea of a good companion. Scatterbrained, moronic, clumsy on land and water, the goldfish dogs had one thing going for them: they were cute in the same way a two-legged dog is cute. Pathetic, but well-wishing. In training classes and standard obedience courses, they only barely passed: just for trying their best.
Most fancy goldfish owners will tell you that goldfish do not have a 3-second memory, and that their memory is closer to 3 months instead. And as we all know, dogs are among the smartest animals on the planet, the smartest of which can remember vocabularies up to 2,000 individual words and are the only animals to understand the concept of pointing-- looking at the object being pointed at instead of the hand. Sadly, goldfish dogs did not receive the long end of the stick. They remain famous for being "the most unintelligent animal ever to be created by man." And that includes chickens and guinea fowl, which are rated 2nd and 3rd on the same chart.
But despite their lack of redeeming qualities in the mental faculty department, they somehow remain popular as household pets. The scientists and breeders are not sure how their popularity came about-- the operation was supposed to be top secret, yet somehow strains of several breeds were loosed upon the exotic pet trading networks. The smooth-coat shubunkin terriers, the Pembroke welsh pearlscales, and the black moor chow chows are among the most popular, though mutts are gaining quite a following as well. Thankfully, the goldfish dogs are considered too simple to experience advanced emotions such as fear or aggression, and generally have easygoing and aloof personalities.
As of yet, there are no ongoing attempts to reclaim the "top secret" goldfish dog population, because as the head developer claimed, "We really just don't care anymore."
"My goldfish dog is the best animal I've ever had the pleasure of keeping," one owner told us excitedly. I interviewed her at the 2nd annual Goldfish Dog Fanciers Association Meeting, which consisted of a grand total of 13 individuals, all claiming ownership of one or even several of the strange creatures. "When she borks at me at the door when I come home from work, my heart just melts. I love Bella so much," the owner went on to say. As I came to understand, "borking" is the sound goldfish dogs make: a garbled version of a dog's bark.
The Goldfish Dog Fanciers Associations meeting consisted of various competitions, including 'Best Bork', 'Cutest Face', 'Buggiest Eyes', and 'Most Endearing Gait'. As opposed to the dog shows put on by the AKC, these meetings tend to be more casual, and 'breed standard' is more like 'breed suggestion'. Goldfish dog judging is more based on personal preference than a set-in-stone set of rules. Even the finless varieties of goldfish dogs, considered 'improper', have been known to win ribbons in a few categories.
"The good thing about goldfish dogs, or gofogs as we call them, they're just so cool with everything! Marshmallow lets me put little shoes and sweaters on him, and he doesn't care! They don't bite, they don't scratch, and they only wet the carpet sometimes. I mean, compared to an actual dog, I'd much rather have a gofog," another fan said.
The public seems to have other ideas regarding the 'gofogs', however. A random passerby, when asked what her opinions of the animals were, simply said, "Those things? They're gross." When I pressed further, she explained, "They're horrible, like, inbred things. I mean, either have a goldfish or a dog, don't put them together in some science lab!"
Other opinions varied from outraged, to somewhat amused, to downright apathetic. It may be a few more years until the Goldfish Dog Fanciers Association picks up more members.
But what does this mean for the well-meaning goldfish dog? "We have a very extensive breeding regimen," the president of the GDFA said, "we have a very small gene pool to work with, but thanks to one of our members who also breeds pomeranians, we have a genetic expert on our side. In a few generations, we should have a healthy breeding stock so that more people can obtain the gofogs and see what amazing, loving, adorable pets they make."
Who knows what the future holds for the goldfish dogs? But somehow, many people doubt that 'gofogs' will ever reach the popularity of dogs or cats, or even goldfish. But time will tell. In the next few years, don't be surprised to see someone walking a celestial calico shepherd down the street, or see a red ranchu pug peeking out of a woman's purse on the subway.
Usually, Halloween for adults means one of a few things. You either bum around a disappointing party in a half-assed costume, answer the door for trick-or-treaters while watching TV, or seek refuge from the Halloweenies at your friend's house in the middle of nowhere.
But, as I'm quickly finding out, that's just not the case for me or my friends. Boring is out of the question.
So here's what went down.
I got off work after my new manager bought me a frappuccino for no reason. Nice. Then my fiance, Stark, and I shopped for ingredients for a recipe for leeks*. While picking out mushrooms, our friend Jessie ambushed us. I turned around and found that she had Day of the Dead skull makeup on. Nice. So we agreed to go back to our place so Jessie could drop off the eggnog and rum in the fridge. Nice. Then we were going to her place so she could pick out things to make a costume to match mine. My costume can be summed up as... well, just take a look.
I call it, "Nightmare-Spawn Skully Demon".
She happened to have a boar skull that would work nicely, and a big awesome cloak to wear, too. It was fate.
After that we headed to the dollar store, where glowsticks and Christmas candy were obtained. Handing Christmas candy out at Halloween? Nice.
Once we got home, the guys were there watching Adventure Time and drinking cider. Jessie and I immediately set to cooking, but not before a healthy dose of 'nog. It's never too early for 'noggin' it up. She said we should cook more often, and I agreed. Who else would get liquored up to cook a frittata on Halloween night? I pick my friends well, ladies and gentlemen.
We shoved the frittata in the oven and set a timer for 30 minutes-- just enough time to go to the store and get more alcohol. Nice. Wednesday night at the Maranda-Stark Household, everybody.
We got home, quite literally, in the nick of time. I ran into the house and pulled the glorious frittata out of the oven. Spoilers: it was delicious.
More cartoons were had and then Jessie and I decided it was costume time. Long story short, we scared a little kid dressed as a football player, but our shenanigans were cut short when Jessie tripped because her skull was obscuring her vision. (I have a feeling normal people don't have that problem, usually.)
Back at home base, we ate some more and gave candy to a group of kids. Then Jessie decided it was far too tame a night and proposed we go trick-or-treating instead. Us, a group of four, ages 21-26, no proper costumes, well-past tipsy.
I was hesitant at first, but then when someone suggested I take Zaphod the ferret, I was all for that. I got his homemade spider costume and put together a very hasty costume for myself. Derrick went as "drunk guy with a deer skull on his head", Stark was a Mysterious Hood, and Jessie went as a homeless werewolf.
One year, when I went trick-or-treating at 16 years old, I had someone outright refuse to give me candy on the basis that we were too old. That was my old neighborhood. My new one... completely different.
We made friends with trick-or-treaters and folks giving candy, most of whom were very surprised and mostly delighted to see a ferret come to their door. At one point, a van pulled up to us, and a woman asked if we had seen her son walking around. He was apparently dressed up as Steve from Minecraft. Nice. Then she saw Zaphod and exclaimed, "Oh, we have sugar gliders at home! Boys, look!" And her two little sons were very excited to see Zaphod, who accepted their petting. And what's even better, the two sons actually gave us candy out of their own candy stashes.
As we continued, we came across a house with cool decorations, which included a fish tank half-full of water with a plastic snake and rat floating in it, with a sign that read, "Beware of the killer snake". I got one of those feelings that told me I already liked these people. The door opened to reveal a big fluffy Bernese mountain dog, very interested in my ferret and Jessie's bone accessory. We got to talking to the couple that lived there about skulls and animals and the couple let us in their well-decorated home. We learned that their son and daughter are both in the arts, they had exotic types of parakeets, and they collected fossils. I thought, "Where have these neighbors been all my life?!"
(Tessy, the dog, was very good at shaking hands. "Shake my paw. Oh my god just shake it. You're holding an animal and I don't know what to do with myself SHAKE MY PAW.")
We left the cool neighbors on the note of, "You guys should come by sometime!" Warm-fuzzies were had by all.
We headed home, leaving our trick-or-treating escapade on a good note. Later, more cartoons were watched, candy and Taco Bell was feasted upon, and we all had the surreal moment of, "We haven't been trick-or-treating in 10+ years. No one refused us candy and everything went way better than expected."
* I'm the only person I know who impulse-buys leeks. "But look at how big they are! And only for a dollar?! I can't afford not to buy these!"
Resisting... urge... to murder cute fuzzy animals...
Seen above is a 6"x6" gouache painting on clayboard. It features a demonic golden fox with a long tongue and six legs. Or, well, it did, until one of our foster ferrets had her way with it. She got onto the table, turned over a glass of water, waded around in the puddle, then decided my art needed a little more... chaos.
You can imagine how mad I was to find my painting I was actually happy with was reduced to a smeary mess. The detail on the face, the delicate background, all shot to hell. The painting then endured a vicious toss into the garbage bin, accompanied with a tapestry of swears.
Eventually I calmed down, pulled it out of the trash, and reconsidered it. I had the technology to fix it. And by technology I mean very tiny brushes and a dark purply-black-colored paint.
Here's what it looks like now.
Still pretty chaotic, but I think it's better. And thankfully, the ferret decided to scratch and smudge the area near the paws, so it kind of looks intentional. Or that's what my friends think, anyway. They're all like, "Wow, I love it! Wouldn't have been able to tell it wasn't intentional if you hadn't said anything!"
Damn, maybe I should let wet animals walk over my paintings more often. And why stop at ferrets? I can dabble in snakes and dogs, and maybe even chickens... The possibilities are endless. Revolutionizing the art world, brb.
Some days, you metaphorically wake up on the wrong side of the bed. Something's weird, and you know it, and like a sequin bra, it rubs you the wrong way. You can't seem to take that bra off. This is especially uncomfortable if you're a dude. And at this point I've lost my metaphor so I'll just say: today is going to be one of those days that tests my strength and resolve. So to take my mind off things, I usually turn to yoga, tea, rain sounds, or cartoons. Well, yoga is proving too difficult for my strained mind, the rain stopped, but at least I have tea and cartoons, so that'll have to be good enough.
All this to say I did two paintings of my ferret recently. HERE YA GO.
"Tree Ferret"
"The Littlest Carnivore"
Using the words "ferret" and "photography" in the same instance should be treated the same as "matter" and "antimatter". It's a big deal when the two are associated. Not because ferret shots are the Holy Grail of photography or anything, but because it's nigh impossible to pull off half-decent shots of the wiggly bastards. I still attempt the impossible, though.
But hey, at least the antler and springbok pelt held still.
Zaphod wonders why the heck there's an antler and springbok pelt on the couch. Humans are weird.
Oh, Lucy's coming to say hello.
H-hi Lucy, that's close enough...
Strike a pose.
Man, check out that sweet antler. Oh I guess a ferret is in the shot too, huh, would'ja look at that.
I'm totally that person who buys a pet, thinking they're going into it with the right setup and proper knowledge, then finds out weeks later they were horribly, horribly wrong. Same goes for my collection of kittyfish. Turns out they like sand, not big rocks. So needy. So I got them some sand, begrudgingly, so they can root around in it like their ancestors did in the wild. I don't actually know where their "ancestors" came from, though. Valhalla, maybe.
I've painstakingly documented the process. Not for educational purposes, because I'm not good at fish, but maybe just because you have nothing better to do than live through my tank maintenance adventure. We're going with that.
So here's the tank before I dumped sand in it. By "dumping sand in it" I actually mean "siphoning half the water out, taking out all the decor, rinsing the sand, potting the loose plants, pouring the sand in, mixing it all together, adding the decor back in, and finally putting more water in". So there I've basically outlined this whole post and you don't even have to read the rest but you're going to stay because you like me! ...Right? Please don't go. The internet is a lonely place. Like space.
From the top counter-clockwise we have the siphon bucket (very high-tech), the decor bucket (also high-tech), the container to hold the kittyfish while I destroy their home, and last but not least, a pillow that may or may not be a Pokemon.
I took out all the decor. The big rocks, the crocodile skull, the flower, the plants, the fake plants, and the tiny Buddha. Not even the Buddha was spared. (Please don't take that last sentence out of context. )
Then I sucked out most of the water and all the gunk in the gravel. So much gunk. Stupid, messy kittyfish.
Now the tank is mostly empty inside. LIKE MY SOUL
And here Buddha laughs at my pathetic attempts at rinsing the sand. Which kind of looks like the charcoal ER technicians make you eat if you drink too much. Not like I know about that kind of thing. I'm a good girl.
I decided to pot all the lighter-colored loose plants because they always manage to suck, for lack of a better term. Is there a term for "those stupid live plants that always grow too tall and lose all their leaves to the point where you think they're going to die but they never do and keep growing too tall anyway"? Because that was what was happening to those stupid plants and I was having none of it. Into little pots you go, jerk-plants. (Not you, Mr. Anubias, you're a good plant. A nice, un-murder-able plant.)
I added a bigger pot for the kittyfish to hide in as well. Also LOOK AT ALL THAT GUNK, UUHHHGH
Here's the tank all nice and not-empty. I like the look the potted plants add to it. Maybe now I won't be shunned by the catfish community for keeping the kittyfish in pebbley substrate. These are valid concerns.
So after I got everything back in order I put the kittyfish back in. Most of them sat in the corner and had a veritable fish-panic-attack ("oh god oh god oh god oh god there's sand in heeeereee"), but not Habernathy. Habernathy is the albino corydoras who is also a badass and nothing phazes him. Phases? Whatever, he's the best. He pretty much started zipping around and shoving his face into the sand like he'd died and gone to kittyfish heaven. Or like a kid in a ball-pit. His enthusiasm was not shared by the others who, to this day, are still not cool with the renovations. Whatever, man. It's cool. Don't appreciate my hard work. See if I care. I'm over it already.
Today was the day.
I say "was" because all the exciting parts of it are essentially over. Even though my boyfriend is making meatloaf for dinner. And meatloaf is totally something to be excited about. But the most exciting things already happened.
Because today was the day of the Reptile EXPO.
I really had no idea what to expect when I got there. It was held at a community center, so basically it was in a giant gymnasium. But it was better because it had snakes and geckos and carnivorous plants. More on that later.
We arrived at 8:15am. It started at 10am. My friend insisted that we arrive early. Like, 2 hours early. She assured me that lines were long and pickings (in terms of animals) were slim. So if you wanted to get a particular snake or lizard or tarantula or goodness knows what else they had, you had to be there before everyone else.
So I dragged my buddy Jessie along so I wouldn't be that weird chick who went by herself to the Reptile EXPO because I imagine that's as socially awkward as going to a movie by yourself. Who does that
So we drained our phone batteries waiting in line (we were second) and we got our photo taken by a worker. (I bet the caption for it is going to be "Can you believe these nerds showed up at 8?! Oh man!")
When we first walked in, we were ushered along a queue and handed a swag bag, like at a Comic Convention, full of little packets of turtle chow and water dechlorinator. At first it was like sensory overload.
"Cute animal! Another cute animal! Oh my goodness they sell carnivorous plants! What is in that glass box over there? Look at the size of that boa!"
Aaaaand you get the point. On to the photo-dump.
I have to compare the EXPO to the Emerald City ComiCon.
I feel more at home around the geekiest of the geeks than I do most places. I think one of the reasons for this is because of how casually the con-goers treat potentially strange situations. We rode the escalator next to a Tuskan Raider, waited in line next to Deadpool, and ate pizza next to The Doctor. And it felt normal. No big deal.
The EXPO is a lot like that. If you see someone walking around with a snake around their neck or brushing an iguana's head with a toothbrush, it's casual. Nothing weird going on here. Everyone there loves their reptiles and is comfortable around them. It's not like at the zoo where the animals are practically drowned by fake rocks and pretentious backdrops of rain forests and deserts. There's such an air of easiness about the Reptile EXPO. If you're there, you're among friends automatically.
Lizzy is awesome.
For instance... We approached a table that had a giant iguana lounging on it. I greeted her as if she were a dog ("Hey there pretty lady, lookit'chu! Beautiful girl!") and the owner was just all smiles. She then proceeded to give Lizzy a brush with a toothbrush between her eyes and Lizzy leaned into it and closed her eyes. It might have been the cutest thing ever, except for all the baby snakes everywhere
I was more surprised than anything at the sheer variety of the animals for sale. Really beautiful, all of them, and I stopped to admire almost every single critter. The vendors were amazing, too. There was none of that awkward "I-walked-up-and-looked-at-your-wares-but-didn't-buy-anything-and-you-gave-me-the-stink-eye" kind of stuff. If you've been to a craft fair, farmer's market, or convention, you know what I mean. These vendors were positively glad just to see you liked their collection, saying "thank you" and striking up conversation about particular morphs and variations. So basically it was wonderful.
Also, worms. With horns. Goddamn Caterpies.
Aside from reptiles there were a host of other crazy things.
Crickets, mice, rats, cockroaches, stick bugs, betta fish, and carnivorous plants, to name a few. Well okay that may have been all of them but still
We found Hypnotoad.
Giant tortoise in a bucket next to a table. NO BIG DEAL.
Chameleons are photogenic as shit.
Is there a word that means "super cool" and also "completely terrifying" at the same time? Insert that word here.
As well and fine as all the critters were, I had a nagging thought in the back of my head-- where are the hognoses?! Western hognose snakes, to be precise. Combine a shovel with a corn snake and give it angry eyebrows and you have yourself a hognose snake. They have snubby noses. And spots. And black tummies. And when they're threatened, they either flatten their necks or flip over backwards, open their mouth, and play dead. So essentially they're drama queens who also think they're badasses and HRRNG I WANTED ONE. In reality, I specifically came to the Reptile EXPO to get a hognose.
Bleeeh, I am dead, also cute, bleeehhh
Moments into the EXPO, a lady we befriended in line earlier approached me excitedly and said, "Did you see the hogs?" And my mind went kind of stupid for a second and I think I replied, "Buhh?" And she led me to the table where they had a perfect little baby hog nose. Just sitting there. It was looking up at me, as if to say, "Take me home!" Although, more likely they were saying, "Human! Release me! This force field is too strong! I SEEK VENGEANCE!" But I don't speak snake, so it's really a tossup.
So, $70 later, I walked out with a baby hognose snake. But not before looking at all the rest of the reptiles and somehow convincing Jessie to get her own snake. Unintentionally.
Hey cutie!
It was a love-bite.
We actually only spent a couple hours at the EXPO, which is surprising since it felt so long. We did end on an amazing note, though. The lady we met in line was sitting outside of the main room and we were chatting about what we got. She came away with a ball python and then opened up a cardboard box and a chameleon just waltzed right out. Again, no big deal
Wait what
Who put that chameleon there
Get back in that box
On another (more critical) note, my hognose has no name. I've received a few suggestions, but nothing is sticking. I'm thinking the word "Banter" is cute, since it's a writing reference and has kind of a spunky attitude, but I'm not sure.
Leave your name suggestions in the comment box! Then I'll post his photo, bio, and name on my "About" page with a shout-out to you for picking his name.
I bought these catfish thinking they were upside-down catfish. But they are never upside-down. On the positive side, I can now add "why is my upside-down catfish not upside-down" to my Google search history. So there's always that.
Despite my slight grudge against them, I did some gesture drawings of them because upside-down or not, they're still cute as sin
Yeah okay there's also a lot of corydoras in this but WHO CARES
Catfish are hard to draw. On a scale of Hands and Feet (very difficult) to Budgies (hella easy), I'd mark catfish closer to Hands and Feet.
Because seriously to hell with feet. Ugh.
On that note, I'm going to go find out what it means to have a catfish as a spirit animal guide. Especially a right-side-up catfish.
Catfish-people are independently-minded but still rely on groups for emotional support from time to time. Catfish-people may also have an unexplained urge to eat food off of the ground.
So, if you read my last post, you knew my red betta, Levi, has a fungus.
Well, had
He's better. Missing one pectoral fin, but the fungus is gone. 100% not there anymore. I'd sort of resigned myself, told myself he was a goner, but I was wrong! Horribly wrong! Which is good!
But dammit I just bought three upside-down catfish and uuuugh the tank would be too crowded if I added Levi back in, now.
A bit of a fishy update! Irwin is a bachelor in his own bowl now and Levi has a fungus. It doesn't appear to be killing him, though, and soon I fear he may have to be moved to an actual tank as opposed to the beer glass he's in now.
Because both bettas are out of the 10-gallon, I decided to fill it with something else.
Enter the kittyfish, a.k.a. corydora catfish.
Mendelson is on the flat rock and Habernathy is the shiny guy wedged in the corner. Dork. You can also kiiinnnnd of see d'Artagnan in the little cave.
The cories, or kittyfish as I fondly refer to them, are very fun little fish. But very hard to photograph.
I wonder if this is how wildlife photographers feel?
Not speaking of which, Jomo, the mountain horned lizard, now lives with my coworker and his water dragon Wall-E. Yes, like the robot.
No worries, though. Jomo is much happier (or so we think) at his new place.
They're bros.
I'm still writing and enjoying every moment of it, in other news. Snippets to come.
Still working on that hermit crab cowboy as well. Putting a cowboy hat on a hermit crab is harder than it sounds. I'll keep you posted.
(Oh, get it? Posted? Because this is a blog? Yeah ok I'll leave. )
|
Ask Dr. Math - Questions and Answers from our Archives
Was Euler wrong? 2*Pi=0?
Date: 03/13/2002 at 17:06:43
From: Warren
Subject: Was Euler wrong? 2*Pi=0?
It is well known that e^(Pi*i) = -1, according to Euler's formula.
While I was surfing the Internet last week, however, I stumbled
across a website with an interesting proof that shows that 2*Pi = 0
by using Euler's famous equation. As far as I can tell, all of the
steps are mathematically sound. I've been puzzling over this problem
over the past few days and I can't seem to make much sense of it.
Here's the proof:
Let x = e^(Pi)
1. x^i = -1
2. (x^i)^i = (-1)^i
3. x^(-1) = (-1)^i
4. [x^(-1)]^i = [(-1)^i]^i
5. x^(-i) = (-1)^(-1)
6. x^(-i) = -1
7. x^(-i) = x^i
8. e^(-Pi*i) = e^(Pi*i)
9. [e^(-Pi*i)]^i = [e^(Pi*i)]^i
10. e^(Pi) = e^(-Pi)
11. ln[e^(Pi)] = ln[e^(-Pi)]
12. Pi = -Pi
13. 2*Pi = 0
The key step is #7, where step #1 is combined with step #6. I've even
checked this on my TI-83 calculator: when I enter e^(Pi*i) it returns
a -1, and, likewise, when I enter e^(-Pi*i) it returns a -1. If both
are equal to -1, this implies that e^(Pi*i) = e^(-Pi*i). Raise both
sides to the power of i and you end up with e^(-Pi) = e^(Pi), which
makes no sense whatsoever. One value is approximately 23.141 and the
other is about 0.043, yet they are equal? From this, you can do some
more mathematical manipulation and end up with 2*Pi = 0. If this were
true, then that would mean that the circumference of any circle is 0.
Obviously, this can't be true.
If you can help clarify this situation, or come up with a possible
answer as to why this proof is not mathematically sound, I'd be very
grateful. Thank you.
Date: 03/13/2002 at 23:25:52
From: Doctor Peterson
Subject: Re: Was Euler wrong? 2*Pi=0?
Hi, Warren.
See if this explanation of a very similar "proof" helps:
Find the Flaw
It is very tricky; even though I wrote that answer, I had trouble with
this one. Your step 8 is fine; it still just says that -1 = -1. (In
fact, most of the previous steps could be left out.) But whereas in
"Find the Flaw" the problem lies in taking the logarithm, here step 10
is already bad before you've done that. That's because complex powers,
as well as logs, can have multiple values. That is mentioned at the
bottom of this page:
Imaginary Exponents and Euler's Equation - Dr. Math FAQ
What you've done here is to show, not that -pi = pi, but that raising
any number, even -1, to an imaginary power can give multiple values,
and therefore is not allowed in a proof. And that's what false proofs
like this are really all about: teaching us to be careful when we do
the "obvious" in algebra!
Here are some pages from the Dr. Math archives that more directly
explain the idea that complex powers are multivalued:
I'll add a little further discussion of my own.
We can write any complex number as r e^(it). Let's calculate this
number raised to a complex power:
(r e^(it))^(a + bi) = (r e^(it))^a * (r e^(it))^(bi)
                    = r^a e^(iat) r^(bi) e^(-bt)
                    = r^a e^(iat) e^(ln(r)bi) e^(-bt)
                    = r^a e^(-bt) e^[(at + b ln(r))i]
Here r^a e^(-bt) is the absolute value, and at + b ln(r) is the angle.
But wait a minute: the angle t is not uniquely defined for a given
number. Any angle t + 2k pi could have been used, for any integer k.
Let's repeat using any such angle:
(r e^(i(t + 2k pi)))^(a + bi)
  = r^a e^(-b(t + 2k pi)) e^[(a(t + 2k pi) + b ln(r))i]
  = r^a e^(-bt) e^(-2kb pi) e^[(at + b ln(r))i] e^(2ka pi i)
  = r^a e^(-bt) e^[(at + b ln(r))i] * e^(-2kb pi) e^(2ka pi i)
The first two factors are the principal value (the same absolute value and angle as before); the last two are a dilation by e^(-2kb pi) and a rotation by 2ka pi, both of which vary with the integer k.
This tells us that the absolute value of a complex power has
infinitely many values, whose spacing depends on b, while the angle
can take different values dependent on a. In fact, if a is an integer,
the angles will all be equivalent, but when it is not an integer, the
angle will spiral around while the absolute value changes. Weird,
isn't it? But in a way it's not that surprising; we see the same with
fractional real exponents, which are likewise multivalued (there are
two square roots and three cube roots, for example). Would you expect
imaginary numbers to be better behaved than fractions when you use
them as exponents?
In your case, you have a pure imaginary exponent and a real base:
(-1)^i = (e^((1 + 2k)pi i))^i = e^(-(1+2k)pi)
So you get infinitely many positive real numbers. Your "proof" just
assumes that two of them are equal, namely those for k=0 and -1.
- Doctor Peterson, The Math Forum
© 1994-2013 The Math Forum
|
Radio-frequency-identification (RFID) systems have seen widespread use in supply-chain management as well as in healthcare industries.1-3 Hospitals have been investing massively in information technology (IT) to reduce operating costs and improve patient safety, and RFID is expected to become critical to healthcare organizations in achieving these goals. But passive UHF tags used to track equipment and inventory may need to withstand gamma radiation at levels of 25 to 40 kGy typically used for sterilization.4,5 This report will evaluate the reliability of passive UHF RFID tags in such a high-radiation environment.
For hospital equipment and materials subject to gamma radiation for the purpose of sterilization, the RFID tag and its internal integrated-circuit (IC) chip and antenna must withstand that radiation in order to effectively track that equipment and material. RFID tags currently on the market are not well suited for such a high-radiation environment; for a tag to withstand gamma radiation, it must have the capability to be hardwired. Testing RFID technology in a medical environment can be difficult because of the complexity of the environment, as cases in implementing information technology (IT) in hospitals have shown.6,7
In a hospital, the adoption of RFID may not necessarily be as involved as in supply-chain applications, since medical services rely more on staff and internal processes than on external suppliers. Nevertheless, any organization that plans to adopt RFID must face multiple technological, managerial, and organizational challenges. Sarma8 considers three major challenges, mainly from the technical viewpoint: non-line-of-sight reading, handling of serial numbers, and handling large volumes of real-time data. Sarma notes that solutions may depend on building an RFID infrastructure, together with middleware and impedance matching of the RFID system to existing systems such as Enterprise Resource Planning (ERP) systems.
The medical and life-science fields have deployed gamma radiation for a variety of applications, including sterilization and removing potential infestation of insects and bacterial contamination in produce imported from other countries. Early experimenters working with item-level RFID tags have discovered that gamma radiation levels used in typical sterilization cycles can permanently damage or affect the data and electronic memory contained in an RFID tag. A gamma radiation level of 25 kGy is typically used for sterilization of disposable medical items, while some medical institutions will use levels as high as 40 kGy for their sterilization procedures. Gamma radiation consists of the emission of massless particles called photons as a result of the decay of a radioactive material. Gamma radiation has a very short wavelength and requires a dense barrier such as lead to stop it (Fig. 1); heavier forms of radiation, such as alpha and beta particles, are stopped more easily but are more dangerous when in direct contact with living organisms. Gamma radiation falls within the same electromagnetic (EM) spectrum as visible light, ultraviolet light, and infrared light, but with much higher energy levels. Medical devices are often sterilized by using gamma radiation emitted by cobalt-60 as a radiation source.
As defined by the United States Food, Drug, and Cosmetics Act, a medical device is "an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part or accessory, which is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body and which does not achieve its primary intended purposes through chemical action within or on the body and which is not dependent upon being metabolized for the achievement of its primary intended purposes."9
RFID is an Auto Identification and Data Capture (AIDC) technology that makes use of omnidirectional wireless radio communications to transfer and store identification data, unlike the line-of-sight operation of barcode scanners.10 Passive RFID transponders or tags are small, resource-limited devices that are powered by the energy of the request signal sent by an RFID reader; this is known as the forward communication or downlink signal. Once the RFID tag receives enough energy to "power up" its internal electronics, the tag can decode the incoming query and produce an appropriate response by modulating the request signal using one or more subcarrier frequencies. The tag responds by employing backscatter or uplink communication. These RFID tags can do a limited amount of processing and have a small amount (<1024 b) of storage. Semi-passive and active RFID tags require a battery for operation, but provide more functionality. Battery-powered RFID chips present fewer security and privacy challenges than passive ones. (See the follow-up article for more details on the types of technologies employed in RFID systems.)
Three general categories of RFID use, first established in retail inventory, are practiced in healthcare environments. The first is as a control setting for improving asset management of drugs and medical devices, to ensure that a patient receives the correct drug in a timely fashion. The second is tracking hospital equipment that is sterilized using gamma radiation. The third is automatic capture of streaming data to ensure that, at any time, reliable data is available for clarification of medical questions. RFID tags and RFID badges allow hospital managers to locate and effectively deploy staff and assets such as crash carts, portable defibrillators, wheelchairs, and infusion pumps. Previous deployments in this area required expensive active tags containing a battery and high infrastructure costs. Recent market developments are allowing passive tags to provide much of this functionality and to overcome concerns around patient and staff privacy. Apart from sterilization applications, RFID has the capacity of reducing the labor cost of scanning items; reducing out-of-stock items; reducing theft loss; and providing proof of delivery, inventory reduction, and facilitating promotions at stores. Recently, researchers at the Cantonal Hospital of St. Gallen (St. Gallen, Switzerland) have found that high-frequency (HF) 13.56-MHz RFID tags do not significantly interfere with the functionality of imaging devices, nor do those imaging devices affect the functionality of the HF RFID tags. The study, undertaken by physicians and researchers at the hospital and at ETH Zurich, found that magnetic resonance imaging (MRI) radiation could raise the temperature of tissue around an RFID tag by, at most, 4°C (7°F), but had no effect on a patient's health. RFID has been observed to interfere only with analog hospital equipment, although most newer communications devices are based on complex digital modulation formats that tend to be more robust in the presence of interference.
RFID technology consists of tags, readers, computer networks, and other systems that may include middleware, databases, and software. The RFID industry consists of suppliers of these various components as well as systems integrators. Because of their operating wavelengths, the antennas of UHF RFID systems are often detuned in the presence of water and metal objects, which reduces read range. Healthcare environments often experience high humidity, for example, because of water baths. Any RFID solution used for healthcare applications, for tagging, sterilization, or even inventory management, must be rugged enough to withstand such a harsh environment.
RFID system, interrogator, and tag performance test methods are covered under the three parts of ISO/IEC 18046. The most common methodology used in the testing of RFID tags is a free-space environment. This test approach measures the response rate and attenuation of tags by varying the distance between the tag and the reader. In the experiments reported here, the power collected by a tag's antenna in free space (Pa) can be stated as Eq. 1:
Pa = (PtGt/(4πr^2))(λ^2/(4π)) (1)
Pa = the total power received by an RFID tag's antenna in free space,
Pt = the power of the transmit antenna,
Gt = the gain of the transmit antenna,
λ = the electromagnetic wavelength of the transmitted signal, and
r = the distance from the transmitter.
Equation 1 shows that the power received by an RFID's antenna is affected by antenna gain, signal wavelength, and distance from the transmission source.13
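As a numerical sketch of Eq. 1 (the transmitter power, gain, frequency, and distance below are illustrative values, not figures from this article):

```python
import math

def received_power(pt_w, gt, wavelength_m, r_m):
    """Free-space power collected by a tag antenna, per Eq. 1:
    Pa = (Pt*Gt / (4*pi*r^2)) * (lambda^2 / (4*pi))."""
    power_density = pt_w * gt / (4 * math.pi * r_m ** 2)   # W/m^2 at distance r
    aperture = wavelength_m ** 2 / (4 * math.pi)           # effective aperture, m^2
    return power_density * aperture

# 1-W transmitter, gain of 4 (~6 dBi), 915-MHz UHF (wavelength ~0.328 m), tag at 3 m
pa = received_power(1.0, 4.0, 0.328, 3.0)
print(f"Pa at 3 m: {pa * 1e6:.0f} microwatts")
```

Doubling the distance cuts Pa by a factor of four, which is the inverse-square behavior Eq. 1 predicts.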
The power available at an antenna, Pa, is a function of various factors including the power and gain (efficiency) of the transmitter antenna (Pt and Gt), the distance from the transmitter (r), the electromagnetic wavelength (λ), and the gain (efficiency) of the RFID tag's antenna (Gtag). To improve the antenna read range without increasing transmitted power, it is necessary to increase the gain of the antenna.
Often the characterization of RFID tags involves characterization of the antenna over a wide range of frequencies.14-16 The RFID reader uses the re-radiated power to demodulate the signal from the tag. The greater the re-radiated power from the tag, the easier it is for the reader to decode the tag's signal. The re-radiated power is also influenced by factors such as antenna gain and tag-antenna impedance matching. Equation 2 expresses the re-radiated power as a function of several factors, including tag gain, Gtag:
Re-radiated power = (4Ra^2/|Za + Zc|^2)PaGtag (2)
Ra = the real (resistive) part of the RFID antenna impedance,
Za = the RFID antenna impedance, and
Zc = the RFID chip impedance.
Equation 3 shows that reradiated power is highly dependent upon the impedance matching between the inlay and the RFID tag's antenna:
K = 4Ra^2/|Za + Zc|^2 (3)
When the chip (load) impedance is zero (a short circuit) and the antenna is mostly resistive, the tag re-radiates four times as much power as with a conjugate-matched chip. When the antenna impedance is highly reactive (a capacitance), a complex-conjugate load actually re-radiates more power than a short-circuited load.14 The antenna and inlay impedance/reactance can have an impact on RFID tag performance. Other factors that can improve the efficiency of an RFID tag are described in Eq. 4, which shows that the maximum read range (Rangemax) is a function of distance and equivalent isotropic radiated power (EIRP).17
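The K factor of Eq. 3 is simple to evaluate. The sketch below uses a hypothetical antenna impedance to compare a conjugate-matched chip against a short-circuited one:

```python
def backscatter_coefficient(za, zc):
    """Power-transfer/backscatter factor K = 4*Ra^2 / |Za + Zc|^2 (Eq. 3),
    where Ra is the real (resistive) part of the antenna impedance Za."""
    ra = za.real
    return 4 * ra ** 2 / abs(za + zc) ** 2

za = 50 + 20j  # hypothetical antenna impedance, ohms

matched = backscatter_coefficient(za, za.conjugate())  # conjugate match: K = 1
shorted = backscatter_coefficient(50 + 0j, 0j)         # shorted chip, resistive antenna: K = 4
print(matched, shorted)
```

With a conjugate match K = 1, while a shorted chip on a purely resistive antenna gives K = 4, i.e. four times the re-radiated power of the matched case.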
Rangemax = d[EIRP/(Ptag-min × Gtag)]^0.5 (4)
where Ptag-min is the minimum power required to activate the tag's chip.
Equation 4 suggests that the read range of an RFID interrogator or reader is a function of the distance from the tag and the EIRP.14
Additional factors that can affect the read range of an RFID reader are summarized in Eq. 5:
Rangemax = (λ/4π)[(EIRP × Gtag × τ)/Ptag-min]^0.5 (5)
In Eq. 5, τ is the same K factor as in Eq. 3. The read range of an RFID reader can be theoretically estimated from wavelength, power, and gain coefficients, but in reality it is difficult to achieve because of many environmental factors. The tag and RFID chip impedances are only two of many factors that affect the read range.13,14,17 The materials from which the tags are made and the form factor can also affect the read range of an RFID antenna. A practical example is a reader with a range of 3 m in one environment and less in another.18-20 The reason for this may be linked to the impedance of the tag's antenna and RFID chip (the parameter τ in Eq. 5). An RFID antenna's impedance can be affected by the tag's substrate even when similar materials are used to build the tag, since changes in impedance change the read range.
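A sketch of Eq. 5 with illustrative numbers (the 4-W EIRP, tag gain of 2, perfect match τ = 1, and 40-µW chip turn-on power are assumptions for the example, not values from this article):

```python
import math

def max_read_range(wavelength_m, eirp_w, g_tag, tau, p_tag_min_w):
    """Theoretical maximum read range per Eq. 5:
    Rangemax = (lambda / (4*pi)) * sqrt(EIRP * Gtag * tau / Ptag_min)."""
    return (wavelength_m / (4 * math.pi)) * math.sqrt(
        eirp_w * g_tag * tau / p_tag_min_w)

# 915-MHz UHF (wavelength ~0.328 m), 4-W EIRP, tag gain 2, perfect match,
# 40-microwatt chip turn-on power
r_max = max_read_range(0.328, 4.0, 2.0, 1.0, 40e-6)
print(f"theoretical max range: {r_max:.1f} m")
```

The result (roughly 11 to 12 m) is a free-space ceiling; as the text notes, detuning, substrate effects, and mismatch (τ < 1) cut the practical range well below this.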
Most of the RFID tags currently on the market cannot withstand high radiation levels. (For a review of typical passive RFID tag construction and components, see the accompanying article.) For an RFID tag to withstand high levels of radiation, it must be capable of being hardwired. A gamma-radiation-resistant tag must be made with materials capable of withstanding the harsh radiation environments typical of the sterilization units of most medical applications, as well as the radiation levels commonly found in and around the radioactive waste produced in nuclear power plants.
Gamma-radiation-resistant RFID tags are capable of tracking healthcare equipment that has been sterilized by means of gamma radiation. The IAEA is also willing to certify some countries to generate electricity from radioactive materials; by deploying gamma-resistant tags in that industry, contact between radioactive materials and human beings can be kept to a minimum.
See associated table
The authors would like to thank SENSTECH (PKT 1/2010) for supporting this work.
Věra Čáslavská
Gymnast Věra Čáslavská is tossed in the air by her teammates after earning the gold medal in the individual all-around competition at the 1968 Olympics in Mexico City. © Jerry Cooke—Time Life Pictures/Getty Images
Věra Čáslavská midway through a vault at the 1968 Olympic Games in Mexico City. Allsport/Getty Images
Věra Čáslavská (born May 3, 1942, Prague, Czechoslovakia [now in Czech Republic]), Czech gymnast who won a total of 35 medals, including 22 gold medals, at the Olympic Games and at world and European championships in the 1950s and ’60s.
Čáslavská began her athletic career as a figure skater, but at age 15 she turned to gymnastics, first appearing in international competition at the 1958 world championships, where she won a silver medal in the team event. She won the balance beam at the 1959 European championships and finished a close second to the Soviet gymnast Larisa Latynina at the 1962 world championships. Čáslavská placed first overall in gymnastics at the 1964 Olympics in Tokyo, also taking gold medals in the balance beam and the vault. At the 1965 and 1967 European championships, she won every women’s gymnastic event. At the 1966 world championships, she contributed to the Czech team’s victory over the Soviets, winning the gold in the combined exercises.
In June 1968 Čáslavská signed the “Two Thousand Words,” a document that called for more rapid progress toward real democracy in Czechoslovakia. After Soviet tanks entered Prague in August of that year, Čáslavská, facing possible arrest for her political stance, fled to the mountain village of Šumperk. She was granted permission to rejoin the Olympic team only a few weeks before the 1968 Summer Games opened in Mexico City. There she dominated the gymnastics competition, winning gold medals in the individual all-around, the vault, the uneven parallel bars, and the floor exercise and silver medals in the balance beam and team competition. The day after winning her last gold medal, Čáslavská capped her Olympic career by marrying Josef Odložil, a Czechoslovakian middle-distance runner.
As a result of her political convictions, Čáslavská fell out of favour with the Czech authorities and was initially refused employment. She was eventually allowed to coach the national gymnastics team. After the collapse of communist rule in 1989, Čáslavská became president of the Czechoslovakian Olympic Committee. When the union with Slovakia was dissolved in 1993, she was named president of the Czech Olympic Committee. From 1995 to 2001 she was a member of the International Olympic Committee.
|
Explore highlights
Gold mancus of Ethelred II
Diameter: 20.000 mm
Weight: 3.510 g
CM 1883-5-16-1
Coins and Medals
Gold mancus of Ethelred II
Anglo-Saxon, AD 1003-16
Minted in Lewes, southern England
Minted by the moneyer Leofwine
Of the Anglo-Saxon coins that have survived from the eighth to the eleventh century, almost all are silver pennies; gold coins are very few. We do not know whether this is because gold coins were always rare, or because people were more careful not to lose them, as they were extremely valuable. Because so few survive, it is difficult to know what they were used for. It has been suggested that they were religious offering pieces, and not part of the regular currency at all.
Anglo-Saxon records occasionally mention a unit of gold called a mancus. This was probably originally a weight, and it became used as a unit of account worth thirty pennies. It is possible that the mancus was also the name of the gold coin. This example is from the reign of Ethelred II, king of the English from 978 to 1016. It comes from the mint of Lewes, and was produced by the moneyer Leofwine, from the same dies that he used to strike silver pennies. This suggests that it was part of the same currency system. The condition of the coin's surface shows that the dies were already rusty when the coin was produced. This also suggests a currency coin, since more care would probably be taken with a presentation or offering piece.
I. Stewart, 'Anglo-Saxon gold coins' in Scripta Nummaria Romana (London, 1978), pp. 143-72
C.S.S. Lyon, 'Historical problems of Anglo-Saxon coinage (3): Denominations and weights', British Numismatic Journal, 38 (1969), pp. 204-22
|
making local government more ethical
Intelligence, Motivation, and Legislative Immunity in a Government Ethics Context
But actually this is a real issue, at least in government ethics. It is often hard to tell the difference between incompetence and misuse of office. Take local government attorneys, for example. Many of them consciously let officials off the hook with poor ethics advice, but many others lack both a basic understanding of government ethics and the professionalism to say so. As for local government officials, many of them also make ethical decisions without a basic understanding of government ethics, and without consulting the appropriate individuals or laws.
And let's face it, many elected officials, like Blagojevich, are intellectually challenged. Few people vote for the smarter candidate, and there is a whole industry available to make just about anyone look good. Name and contacts (Blagojevich had one out of two) are far more important than competence.
A Lack of Intrapersonal Intelligence
Even more than intellectual limitations and professional incompetence, however, a lack of intrapersonal intelligence is responsible for unethical conduct. Intrapersonal intelligence is the ability to be aware of one's own emotional states, feelings, and motivations, and to draw on them to understand and guide one's behavior.
Even very intelligent people often are out of touch with themselves, and elected officials are more likely to be out of touch with themselves than others of the same intellectual capacity.
Elected officials tend to have a more developed interpersonal intelligence, that is, the ability to be aware of others' emotional states, feelings, and motivations, and to act upon this knowledge — for example, by influencing people to do what one wants.
Intelligence, Motivation, and Government Ethics
Believe it or not, all this psychological mumbo-jumbo is not only relevant to government ethics, but it is even relevant to legislative immunity in a government ethics context.
A big difference between government ethics and criminal corruption laws is that knowledge and motivation are irrelevant to government ethics but central to criminal corruption. The difference between a bribe and a gift is that to prove a bribe, you have to prove a quid pro quo, that is, an intent to take money in return for a vote or another action. For a gift to be a violation of an ethics code, no intent and no motivation need be proved.
The dullest knife in the drawer defense doesn't work in a government ethics context. All the dullest knife needs to know is who is the right person to ask for ethics advice, and when to ask for it.
In effect, there is a tradeoff here: less needs to be proved in government ethics, and penalties are lower. That is why I get so worked up about government ethics codes that make violations a crime. This shows a misunderstanding of the difference between government ethics and criminal corruption, and therefore undermines at least the enforcement side of government ethics.
Motivation and Legislative Immunity
Let's return to that word "motivation." No motivation need be proved in government ethics, as opposed to criminal corruption. And coincidentally, motivation is central to how the Supreme Court has interpreted the constitutional Speech or Debate Clause. Here, for example, is what the Supreme Court said in one of the most important legislative immunity cases, United States v. Brewster, 408 U.S. 501 (1972). This was quoted to me by Louisiana Rep. Arnold, who brought the government ethics legislative immunity case in Louisiana that started the ball rolling (see my 2007 blog post):
Of course, no one would care if their votes or other acts were mentioned in a court or ethics commission. It is, more than anything, the motivation behind these acts that legislators and the Supreme Court feel must be protected by the Speech or Debate Clause. The Supreme Court interprets this clause as prohibiting the asking of legislators, outside the legislature itself, why they said or did what they said or did.
Government ethics doesn't ask this question. Why did the legislator accept the gift? Who cares. It was prohibited in order to prevent the appearance of impropriety. Impropriety, greed, corruption, conspiracy, fraud — none of these have to be shown or asked about in a classic government ethics matter.
If a local legislator has a conflict between her role as a legislator and her role as a businessperson or as a mother, that's enough to require the legislator to recuse herself, no questions asked. No questions asked about her motivation, about how she feels about her son or whether she puts her business income ahead of her fiduciary duties to her constituents. She might want more than anything in the world to prevent her son from getting the job, from having him anywhere near city hall, but she still shouldn't vote on it. Her motivation is not the point.
This important difference between government ethics and crime enforcement is one more reason why the Speech or Debate Clause should not be applied in a government ethics context. The founding fathers never contemplated government ethics, as we know it, and it is highly questionable whether in their wisdom they would have applied the clause to a non-criminal, low-penalty, administrative procedure where a legislator's motivation is not at issue.
Robert Wechsler
Director of Research, City Ethics
|
National Research Council hits NASA space exploration, proposes pathway to Mars
The National Research Council issued its report on the future of space exploration on Wednesday. The report stated that the “horizon goal” for any program of space exploration in the near term (i.e., the next two decades) is a Mars surface expedition. It also stated that the current NASA program, which includes a mission that would snag an asteroid, put it in lunar orbit, and visit it with astronauts, is inadequate to meet that goal.
The report gave two reasons for its critique of the current NASA program. First, the asteroid redirect mission would not create and test the technologies necessary to conduct a crewed Mars mission. Second, NASA projects essentially flat budgets for the foreseeable future, while any space exploration program worthy of the name will cost considerably more money, requiring five percent increases in NASA funding over a number of years.
The report proposes a number of pathways which can be broken down into two categories, moon and asteroid. A series of lunar surface expeditions would test technologies that will be necessary for conducting operations on the Martian surface. A visit to an asteroid “in its native orbit” would test deep space transportation technologies. A combination of both would likely be preferable for the Mars goal.
On the subject of money, the report presented a bad news/good news finding. The bad news is that there is little public support for increasing funding for space exploration. The good news is that there is also little opposition to doing the same. This suggests that a president and a congress would be able to ramp up space spending without expending too much political capital. Indeed, there is every possibility that public support will grow as the benefits of the space exploration program become apparent.
There were no real surprises in the report, as it tends to confirm consensus thinking about space exploration outside the current administration. It suggests essentially a return to the Bush era Constellation program, albeit with higher and more stable funding. Whether this recommendation will achieve some political resonance remains to be seen.
|
|
Steven Kotler Contributor
Crowdsourcing The Final Frontier
It’s been an interesting few months for the commercialization of space.
The festivities kicked off last March, when the 3D modeling platform Sunglass and space company DIYRockets announced an incentive competition aimed at creating an “open-source 3D-printed rocket engine” capable of sending nano-satellites into orbit.
The challenge marks the first time an open source methodology has been applied to the commercial space industry. The hope is that these next-wave rockets will democratize the growing low-Earth-orbit small-payload delivery market and, ultimately, disrupt the entire space transportation industry.
The idea that this will happen sooner rather than later is not even a stretch. Remember, it took Chris Anderson and his cohorts at DIY Drones about a year’s worth of open source work to create an autonomous quadcopter that duplicated 90 percent of the military’s $250,000 Raven, except theirs cost about $300.
Meanwhile, the 3D rocket contest announcement was followed a few weeks later by the next bit of space news: the April 19 launch of SpaceX’s Grasshopper rocket.
The Grasshopper is part of an attempt to build reusable suborbital rockets: it takes off vertically and lands vertically, no small feat for something that stands 10 stories tall.
The Grasshopper itself is a disruptive craft, but how long until someone disrupts the disruptors? How long will it be before we’re crowdsourcing the next version of the Grasshopper? How long before we’ve moved from crowdsourcing unmanned space vehicles to manned ones?
This too is no longer beyond the pale.
Consider that last Monday, Virgin Galactic’s SpaceShipTwo powered up its engines for the first time. SpaceShipOne, for those who remember, was the vehicle that won the $10 million Ansari X Prize in 2004. It was a demonstration project of sorts, the proof positive that a private company could pull off a space flight. The idea behind SpaceShipTwo is tourism—the eventual goal being to take paying customers on a suborbital flight adventure cruise.
The SpaceShipTwo flight was a test burn—those engines only stayed on for about 16 seconds—but that was still enough to send the craft over Mach 1.2 and to a height of 55,000 feet. The real news, at least according to Virgin Galactic president and CEO George Whitesides, is that this test was the first in a final series—a series that ends with actual space flights (some 500 people have purchased $200,000 tickets). If everything goes according to plan, paying customers will go rocket man before year’s end.
Again, how long until we’re crowdsourcing commercial space flight? How long until we’re 3D printing all the rocket parts we need? How long until it’s lunar landings? Is it too ridiculous to consider that by 2025 someone will download blueprints from the internet, switch on their backyard fabricator and go John Glenn under their own power?
These are no longer nonsensical questions.
In fact, if you’re looking for market stimulus, in the related and ironic news category: not days after Virgin Galactic announced success, Russia, which, after the Space Shuttle’s retirement, is contracted to ferry U.S. astronauts to the International Space Station, announced a $5 million per-seat price hike, raising the cost per astronaut to $70.6 million.
For certain, at $70.6 million a ticket, this is an industry ripe for some open-sourced disruption.
|
Home > Green Energy > Hydrogen Power >
Existing Natural Gas Pipelines Could Carry Hydrogen, Too
Map of Natural Gas Pipelines in the Continental US
According to the US Department of Energy, natural gas pipelines are an ideal way to distribute natural gas, even blended with hydrogen. Actually, blending hydrogen and natural gas is nothing new, and dates back to the mid-1800s.
The adoption of hydrogen fuel cell vehicles seems to hinge on the development of the infrastructure to fuel them, but what we may not realize is that the infrastructure is right under our feet.
In the US there are only a handful of public-access hydrogen refueling stations, which raises the question: if we’re going to make the switch to hydrogen fuel cell vehicles, how are we going to refuel them?
Of course this is a valid concern, and it has led many to believe that it could cost billions to implement hydrogen infrastructure, including generation, distribution, and refueling facilities. What we might not have realized is that a working distribution network is already in place.
Manufactured gas in the mid-1800s was a blend of up to 50% hydrogen with natural or petroleum gas. The practice of blending hydrogen continued until the 1950s in the continental US, and Hawaii still blends hydrogen with its natural gas.
Today, there are some 2.44 million miles of natural gas pipeline and hundreds of underground storage facilities in the US. These lines serviced about 25% of total US energy consumption in 2010. So is the hydrogen infrastructure already in place? And if so, why are we trying to reinvent the wheel?
|
Moon Mania
Lesson Plans Moon Resources
Moon Haiku Lesson Plan
Grade Level: K-4
Curriculum Area: Language Arts
Lesson Objectives:
• ELA-2-E1 dictating or writing a composition that clearly states or implies a central idea with supporting details in a logical, sequential order; (1,4)
• ELA-2-E3 creating written texts using the writing process; (1,4)
• ELA-2-E4 using narration, description, exposition, and persuasion to develop compositions (e.g., notes, stories, letters, poems, logs); (1,4)
• ELA-3-E5 spelling accurately using strategies (e.g., letter-sound correspondence, hearing and recording sounds in sequence, spelling patterns, pronunciation) and resources (e.g., glossary, dictionary) when necessary; (1,4)
• ELA-4-E4 giving rehearsed and unrehearsed presentations; (1,4)
• ELA-5-E4 using available technology to produce, revise, and publish a variety of works; (1,3,4)
Technology Performance Indicators: Use technology tools (e.g., publishing, multimedia tools, and word processing software) for individual and for simple collaborative writing, communication, and publishing activities for a variety of audiences. (1,3)
Technology Connection: Word Processing or Paint Software
Assessment: poem, presentation, technology use
Vocabulary: Haiku
1. The teacher can read any moon poem from the book resource list.
2. The teacher will discuss haiku poems with the students, giving hard-copy examples. A haiku is composed of three lines: the first line has 5 syllables, the second line has 7 syllables, and the third line has 5 syllables.
3. Brainstorm with students to help with words and pictures they may use.
4. The teacher will demonstrate the actual process of creating a haiku with pictures.
5. Students will write their own haiku poem about the moon.
6. When ready students will type their poem in word processing software or paint software, illustrate by either drawing a picture or inserting a picture on the computer, and print a hard copy.
7. Students will read their poem to the class.
8. The teacher will assemble the poems into a class poem book to display at moon festival.
9. Assess student work using a rubric. (See our sample)
Optional: Have students email their poems or publish on the web.
Haiku Web site:
|
PHP DevCenter
PHP Pocket Reference
Boolean Values
Every value in PHP has a boolean truth value (true or false) associated with it. This value is typically used in control structures, like if/else and for. The boolean value associated with a data value is determined as follows:
• For an integer or floating point value, the boolean value is false if the value is 0; otherwise the boolean value is true.
• For a string value, the boolean value is false if the string is empty; otherwise the boolean value is true.
• For an array, the boolean value is false if the array has no elements; otherwise the boolean value is true.
• For an object, the boolean value is false if the object has no defined variables or functions; otherwise the boolean value is true.
• For an undefined object (a variable that has not been defined at all), the boolean value is false.
PHP has two built-in keywords, true and false, where true represents the integer value 1 and false represents the empty string.
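These rules can be made concrete with explicit (bool) casts; the variable names below are my own, not part of the reference:

```php
// Boolean truth value of each kind of data value, per the rules above.
$zero_int  = (bool) 0;         // false: integer zero
$nonzero   = (bool) 3.14;      // true:  non-zero floating point value
$empty_str = (bool) "";        // false: empty string
$some_str  = (bool) "PHP";     // true:  non-empty string
$empty_arr = (bool) array();   // false: array with no elements
$full_arr  = (bool) array(1);  // true:  array with elements
```

Note that in current PHP releases the string "0" also converts to false, a special case worth checking in the manual for the version you are running.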
Type Casting
Variables in PHP do not need to be explicitly typed. PHP sets the type when a variable is first used in a script. You can explicitly specify a type using C-style casting.
For example:
$var = (int) "123abc";
Without the (int) in this example, PHP creates a string variable. With the explicit cast, however, we have created an integer variable with a value of 123. The following table shows the available cast operators in PHP:
(int), (integer)            Cast to an integer
(real), (double), (float)   Cast to a floating point number
(string)                    Cast to a string
(array)                     Cast to an array
(object)                    Cast to an object
Although they are not usually needed, PHP does provide the following built-in functions to check variable types in your program: gettype( ), is_long( ), is_double( ), is_string( ), is_array( ), and is_object( ).
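A short sketch combining a cast with these type-checking functions (the variable names are mine):

```php
$var = (int) "123abc";   // the leading digits survive the cast
$num = (string) 456;     // cast an integer to a string

$t1 = gettype($var);     // "integer"
$t2 = gettype($num);     // "string"
$ok = is_string($num);   // true
```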
An expression is the basic building block of the language. Anything with a value can be thought of as an expression. Examples include literal values, variables, and function calls.
By combining many of these basic expressions, you can build larger and more complex expressions.
Note that the echo statement we've used in numerous examples cannot be part of a complex expression because it does not have a return value. The print statement, on the other hand, can be used as part of complex expression, as it does have a return value. In all other respects, echo and print are identical--they output data.
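Because print has a return value, it can sit inside an assignment; in current PHP releases that value is always the integer 1 (a detail assumed here, not stated in the text above):

```php
// print is an expression, so its result can be captured;
// the same cannot be done with echo.
$ret = (print "testing\n");  // $ret receives print's return value
$next = $ret + 1;            // and that value can be used like any other
```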
Expressions are combined and manipulated using operators. The following list shows the operators available in PHP, from highest to lowest precedence. These operators should be familiar to you if you have any C, Java, or Perl experience.
!, ~, ++, --, @, (the casting operators)
*, /, %
+, -, .
<<, >>
<, <=, >=, >
==, !=
? : (conditional operator)
=, +=, -=, *=, /=, %=, ^=, .=, &=, |=
Control Structures
The control structures in PHP are very similar to those used by the C language. Control structures are used to control the logical flow through a PHP script. PHP's control structures have two syntaxes that can be used interchangeably. The first form uses C-style curly braces to enclose statement blocks, while the second style uses a more verbose syntax that includes explicit ending statements. The first style is preferable when the control structure is completely within a PHP code block. The second style is useful when the construct spans a large section of intermixed code and HTML. The two styles are completely interchangeable, however, so it is really a matter of personal preference which one you use.
The if statement is a standard conditional found in most languages. Here are the two syntaxes for the if statement:
if (expr) {
    statements
} elseif (expr) {
    statements
} else {
    statements
}

The second, more verbose syntax:

if (expr):
    statements
elseif (expr):
    statements
else:
    statements
endif;
The if statement causes particular code to be executed if the expression it acts on is true. With the first form, you can omit the braces if you only need to execute a single statement.
The switch statement can be used in place of a lengthy if statement. Here are the two syntaxes for switch:
switch (expr) {
    case expr:
        statements
        break;
    default:
        statements
        break;
}

The second syntax:

switch (expr):
    case expr:
        statements
        break;
    default:
        statements
        break;
endswitch;
The expression for each case statement is compared against the switch expression and, if they match, the code following that particular case is executed. The break keyword signals the end of a particular case; it may be omitted, which causes control to flow into the next case. If none of the case expressions match the switch expression, the default case is executed.
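The fall-through behavior of an omitted break can be seen in a small sketch (the function name and cases are my own):

```php
function size_class($n) {
    switch ($n) {
        case 0:            // no break here, so control falls
        case 1:            // through: 0 and 1 share one result
            return "small";
        case 2:
            return "medium";
        default:
            return "large";
    }
}
```

Both size_class(0) and size_class(1) return "small", while any unmatched value lands in the default case.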
The while statement is a looping construct that repeatedly executes some code while a particular expression is true:
while (expr) {
    statements
}

The second syntax:

while (expr):
    statements
endwhile;
The while expression is checked before the start of each iteration. If the expression evaluates to true, the code within the loop is executed. If the expression evaluates to false, however, execution skips to the code immediately following the while loop. Note that you can omit the curly braces with the first form of the while statement if you only need to execute a single statement.
It is possible to break out of a running loop at any time using the break keyword. This stops the current loop and, if control is within a nested set of loops, the next outer loop continues. It is also possible to break out of many levels of nested loops by passing a numerical argument to the break statement (break n) that specifies the number of nested loops it should break out of. You can skip the rest of a given loop and go onto the next iteration by using the continue keyword. With continue n, you can skip the current iterations of the n innermost loops.
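For example, a hypothetical pair of nested loops abandoned with break 2:

```php
// break 2 exits both the inner and the outer loop at once.
$visited = "";
for ($i = 0; $i < 3; $i++) {
    for ($j = 0; $j < 3; $j++) {
        if ($i == 1 && $j == 1) {
            break 2;           // leave both loops immediately
        }
        $visited .= "$i$j ";
    }
}
// $visited is now "00 01 02 10 "
```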
The do/while statement is similar to the while statement, except that the conditional expression is checked at the end of each iteration instead of before it:
do {
    statements
} while (expr);
Note that due to the order of the parts of this statement, there is only one valid syntax. If you only need to execute a single statement, you can omit the curly braces from the syntax. The break and continue statements work with this statement in the same way that they do with the while statement.
A for loop is a more complex looping construct than the simple while loop:
for (start_expr; cond_expr; iter_expr) {
    statements
}

The second syntax:

for (start_expr; cond_expr; iter_expr):
    statements
endfor;
A for loop takes three expressions. The first is the start expression; it is evaluated once when the loop begins. This is generally used for initializing a loop counter. The second expression is a conditional expression that controls the iteration of the loop. This expression is checked prior to each iteration. The third expression, the iterative expression, is evaluated at the end of each iteration and is typically used to increment the loop counter. With the first form of the for statement, you can omit the braces if you only need to execute a single statement.
The break and continue statements work with a for loop like they do with a while loop, except that continue causes the iterative expression to be evaluated before the loop conditional expression is checked.
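A small sketch of that interaction (variable names mine): because continue evaluates the iterative expression first, $i keeps advancing and the loop cannot stall.

```php
$sum = 0;
for ($i = 1; $i <= 10; $i++) {
    if ($i % 2 == 0) {
        continue;       // skip even numbers; $i++ still runs
    }
    $sum += $i;         // adds 1 + 3 + 5 + 7 + 9
}
// $sum is 25, and $i has advanced past the bound to 11
```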
A function is a named sequence of code statements that can optionally accept parameters and return a value. A function call is an expression that has a value; its value is the returned value from the function. PHP provides a large number of internal functions. The "Function Reference" section lists all of the commonly available functions. PHP also supports user-definable functions. To define a function, use the function keyword. For example:
function soundcheck($a, $b, $c) {
    return "Testing, $a, $b, $c";
}
When you define a function, you need to be careful what name you give it. In particular, you need to make sure that the name does not conflict with any of the internal PHP functions. If you do use a function name that conflicts with an internal function, you get the following error:
Fatal error: Can't redeclare already declared function in filename on line N
After you define a function, you call it by passing in the appropriate arguments. For example:
echo soundcheck(4, 5, 6);
You can also create functions with optional parameters. To do so, you set a default value for each optional parameter in the definition, using C++ style. For example, here's how to make all the parameters to the soundcheck( ) function optional:
function soundcheck($a=1, $b=2, $c=3) {
    return "Testing, $a, $b, $c";
}
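Calls can then supply as few or as many arguments as needed; the definition is repeated here so the sketch stands alone:

```php
function soundcheck($a = 1, $b = 2, $c = 3) {
    return "Testing, $a, $b, $c";
}

$r1 = soundcheck();          // all three defaults are used
$r2 = soundcheck(4);         // $b and $c fall back to defaults
$r3 = soundcheck(4, 5, 6);   // no defaults used
```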
Variable Scope
The scope of a variable refers to where in a program the variable is available. If a variable is defined in the main part of a PHP script (i.e., not inside a function or a class), it is in the global scope. Note that global variables are only available during the current request. The only way to make variables in one page available to subsequent requests to another page is to pass them to that page via cookies, GET method data, or POST method data. To access a global variable from inside a function, you need to use the global keyword. For example:
function test() {
    global $var;
    echo $var;
}

$var = "Hello World";
test();
The $GLOBALS array is an alternative mechanism for accessing variables in the global scope. This is an associative array of all the variables currently defined in the global scope:
function test() {
    echo $GLOBALS["var"];
}

$var = "Hello World";
test();
Every function has its own scope. When you create a variable inside of a function, that variable has local scope. In other words, it is only available within the function. In addition, if there is a global variable with the same name as a variable within a function, any changes to the function variable do not affect the value of the global variable.
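A brief sketch of that shadowing (the function name is my own):

```php
$count = 10;            // a global variable

function bump() {
    $count = 0;         // a new local $count; the global is untouched
    $count++;
    return $count;
}

$inside  = bump();      // 1: the local copy
$outside = $count;      // still 10: the global never changed
```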
When you call a function, the arguments you pass to the function (if any) are defined as variables within the function, using the parameter names as variable names. Just as with variables created within a function, these passed arguments are only available within the scope of the function.
Passing Arguments
There are two ways you can pass arguments to a function: by value and by reference. To pass an argument by value, you pass in any valid expression. That expression is evaluated and the value is assigned to the corresponding parameter defined within the function. Any changes you make to the parameter within the function have no effect on the argument passed to the function. For example:
function triple($x) {
    $x = $x * 3;
    return $x;
}

$var = 10;
triple($var);
In this case, $var evaluates to 10 when triple( ) is called, so $x is set to 10 inside the function. When $x is tripled, that change does not affect the value of $var outside the function.
In contrast, when you pass an argument by reference, changes to the parameter within the function do affect the value of the argument outside the scope of the function. That's because when you pass an argument by reference, you must pass a variable to the function. Now the parameter in the function refers directly to the value of the variable, meaning that any changes within the function are also visible outside the function. For example:
function triple($x) {
    $x = $x * 3;
    return $x;
}

$var = 10;
triple(&$var);
The & that precedes $var in the call to triple( ) causes the argument to be passed by reference, so the end result is that $var ends up with a value of 30.
Static Variables
PHP supports declaring local function variables as static. A static variable retains its value between function calls, but is still accessible only from within the function it is declared in. Static variables can be initialized and this initialization only takes place the first time the static declaration is executed. Static variables are often used as counters, as in this example:
function hitcount( ) {
    static $count = 0;
    $count++;
    if ($count == 1) {
        print "This is the first time this page";
        print " has been accessed";
    }
    else {
        print "This page has been accessed $count";
        print " times";
    }
}
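The behavior can be sketched with a simpler static counter that returns its value instead of printing it (a minimal example of our own):

```php
<?php
// $n is initialized only on the first call, then keeps its value
// across subsequent calls to the same function.
function counter() {
    static $n = 0;
    $n++;
    return $n;
}

$a = counter();
$b = counter();
$c = counter();
// $a, $b and $c are 1, 2 and 3
```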
Web-Related Variables
PHP automatically creates global variables for all the data it receives in an HTTP request. This can include GET data, POST data, cookie data, and environment variables. Say you have an HTML form that looks as follows:
<FORM ACTION="test.php3" METHOD="POST">
<INPUT TYPE=text NAME=var>
</FORM>
When the form is submitted to the test.php3 file, the $var variable within that file is set to whatever the user entered in the text field.
A variable can also be set in a URL query string, like this:
test.php3?var=value
When the request for this URL is processed, the $var variable is set for the test.php3 page.
Any environment variables present in your web server's configuration are also made available, along with any CGI-style variables your web server might set. The actual set of variables varies between different web servers. The best way to get a list of these variables is to use PHP's special information tag. Put the following code in a page and load the page in your browser:
<? phpinfo( ) ?>
You should see a page with quite a bit of information about PHP and the machine it is running on. There is a table that describes each of the extensions currently enabled in PHP. Another table shows the current values of all the various configuration directives from your php3.ini file. Following those two tables are more tables showing the regular environment variables, the special PHP internal variables, and the special environment variables that your web server has added. Finally, the HTTP request and response headers for the current request are shown.
Sometimes it is convenient to create a generic form handler, where you don't necessarily know all the form element names. To support this, PHP provides GET, POST, and cookie associative arrays that contain all of the data passed to the page using the different techniques. These arrays are named $HTTP_GET_VARS, $HTTP_POST_VARS, and $HTTP_COOKIE_VARS, respectively. For example, here's another way to access the value of the text field in our form:
echo $HTTP_POST_VARS["var"];
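A generic handler can walk the whole array without knowing any field names in advance. A sketch (the request data is simulated here; in a real request PHP fills $HTTP_POST_VARS itself, and PHP 3 would use while (list($name, $value) = each($HTTP_POST_VARS)) where later versions use foreach):

```php
<?php
// Echo back every POST field without knowing its name in advance.
// Simulated request data for illustration only.
$HTTP_POST_VARS = array("var" => "Hello", "color" => "blue");

$report = "";
foreach ($HTTP_POST_VARS as $name => $value) {
    $report .= "$name = $value\n";
}
echo $report;
```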
PHP sets global variables in a particular order. By default, global variables are set first from GET data, then from POST data, and finally from cookie data. This means that if you have a form with a field named var that uses the GET method and a cookie with a var value, there is just one global variable named $var, and it has the value of the cookie data. Of course, you can still get at the GET data through the $HTTP_GET_VARS array. The default order can be changed with the gpc_order directive in the php3.ini file.
The Mad Dog
Symbolism, Imagery, Allegory
Meet Tim Johnson. He was just snuffling along, investigating interesting smells, burying bones only to dig them up again, and looking out for lady dogs, when—bam—the symbolic structure of the book picks him up and decrees he has to die. Why? What did poor Tim the Dog ever do to get infected with rabies and be gunned down like, well, a dog?
For starters, there's his name. It may seem odd to give an animal the last name of the family it belongs to, but it's apparently common practice in Maycomb. Judge Taylor's pooch gets the same treatment. But more interestingly, it allows the dog's name to sound suspiciously like that of another character. Tim Johnson…Tom Robinson? Coincidence? Maybe. But Scout's memory of her father shooting the dog does pop up more than once in situations involving Tom, and doesn't get mentioned otherwise.
For example, after Scout turns away the lynch mob, her memory of Atticus in front of the jail merges with her memory of him shooting the dog.
I was very tired, and was drifting into sleep when the memory of Atticus calmly folding his newspaper and pushing back his hat became Atticus standing in the middle of an empty waiting street, pushing up his glasses. The full meaning of the night's events hit me and I began crying. (16.3)
But why does Scout associate the two images? Perhaps they're both examples of Atticus doing tough things he doesn't want to do. Or of Atticus facing off with a mindless threat. (He does later refer to the men in the lynch mob as "animals" [16.22]).
Scout returns to this memory again when she's dozing off, waiting for the jury to announce its verdict in Tom's case:
The feeling grew until the atmosphere in the courtroom was exactly the same as a cold February morning, when the mockingbirds were still, and the carpenters had stopped hammering on Miss Maudie's new house, and every wood door in the neighborhood was shut as tight as the doors of the Radley Place. A deserted, waiting, empty street, and the courtroom was packed with people. A steaming summer night was no different from a winter morning. […]. I expected Mr. Tate to say any minute, "Take him, Mr. Finch...." (21.46)
Why does Scout have this feeling? In both past and present, she's waiting for something to happen; both times, she has no power over the outcome. In the previous instance, Atticus's skill with a gun was able to save the neighborhood from the mad dog; will he be able to do the same this time? The same image recurs once more as the jury delivers their verdict.
I saw something only a lawyer's child could be expected to see, could be expected to watch for, and it was like watching Atticus walk into the street, raise a rifle to his shoulder and pull the trigger, but watching all the time knowing that the gun was empty. A jury never looks at a defendant it has convicted, and when this jury came in, not one of them looked at Tom Robinson. (21.48)
Even Atticus's talent for sharp-shooting can't do anything if the gun isn't loaded. It's tempting to try to map out the symbolism here—is the gun the legal process? are the bullets the jury? is Tim Johnson racism?—but that might be an oversimplification. Perhaps it's just the feeling Scout has that's the link between the two situations—the sick horror at what's happening, but knowing that it can't be any other way.
Japan ministers ignored safety warnings over nuclear reactors
Seismologist Ishibashi Katsuhiko claimed that an accident was likely and that plants have 'fundamental vulnerability'
Fukushima nuclear power plant
The emergency at Fukushima's nuclear power plant comes almost 25 years after the meltdown at Chernobyl in Ukraine (above). Photograph: Sergey Dolzhenko/EPA
The timing of the near nuclear disaster at Fukushima Daiichi could not have been more appropriate. In only a few weeks the world will mark the 25th anniversary of the worst nuclear plant disaster ever to affect our planet – at Chernobyl in Ukraine. A major core meltdown released a deadly cloud of radioactive material over Europe and gave the name Chernobyl a terrible resonance.
This weekend it is clear that the name Fukushima came perilously close to achieving a similar notoriety. However, the real embarrassment for the Japanese government is not so much the nature of the accident but the fact it was warned long ago about the risks it faced in building nuclear plants in areas of intense seismic activity. Several years ago, the seismologist Ishibashi Katsuhiko stated, specifically, that such an accident was highly likely to occur. Nuclear power plants in Japan have a "fundamental vulnerability" to major earthquakes, Katsuhiko said in 2007. The government, the power industry and the academic community had seriously underestimated the potential risks posed by major quakes.
Katsuhiko, who is professor of urban safety at Kobe University, has highlighted three incidents at reactors between 2005 and 2007. Atomic plants at Onagawa, Shika and Kashiwazaki-Kariwa were all struck by earthquakes that triggered tremors stronger than those the reactors had been designed to survive.
In the case of the incident at the Kashiwazaki-Kariwa reactor in northwestern Japan, a magnitude-6.8 earthquake on 16 July 2007 set off a fire that blazed for two hours and allowed radioactive water to leak from the plant. However, no action was taken in the wake of any of these incidents, despite Katsuhiko's warning at the time that the nation's reactors had "fatal flaws" in their design.
Japan is the world's third largest nuclear power user, with 53 reactors that provide 34.5% of its electricity, and there are plans to increase provision to 50% by 2030. Unfortunately its nuclear industry is bedevilled with controversy. In 2002 the president of the country's largest power utility was forced to resign after he and other senior officials were suspected of falsifying plant safety records. Nor is the nature of its reactor planning inducing much comfort.
The trouble is, says Katsuhiko, that Japan began building up its atomic energy system 40 years ago, when seismic activity in the country was comparatively low. This affected the designs of plants which were not built to robust enough standards, the seismologist argues.
Since then, Japan has experienced more serious quakes as tension has built up on tectonic plates, culminating in Friday's devastating earthquake, the worst in Japan for more than 100 years. The result was an incident that came perilously close to triggering a nuclear meltdown.
Starved of coolant, the reactor would have heated up dangerously until its fuel rods melted and released a cloud of highly radioactive material. Not surprisingly, the International Atomic Energy Agency has announced it is now urgently seeking details of what happened at Fukushima. The rest of the world – which includes many countries, including Britain, that are preparing significant nuclear expansion plans – will be looking very closely at what it finds.
• This article was amended on 14 March 2011. The caption implied that the photograph was of Fukushima nuclear plant, not Chernobyl. This has been amended
Abscessed Teeth And Bruxism TMJ Disorders
Before we begin analyzing tooth abscesses, we need to recall what bruxism and TMJ disorders are. Bruxism and TMJ disorders are usually associated with jaw pain and teeth grinding. They can cause a wide range of symptoms, from popping noises in the ear to difficulty chewing food. An abscessed tooth is an infection caused by a pocket of pus in the tissue around the tooth. Abscesses are very serious conditions and can lead to serious complications if they aren't treated immediately. When the pulp of a tooth dies due to damage or decay, bacteria begin to grow in the dead tissue that is left. These bacteria eventually spread from the root of the dead tooth into the tissue below and create a pocket of pus - the abscess.
The most common treatment for an abscess is antibiotics to kill the infection, followed by removal of the tooth. The dentist can also perform a root canal in an attempt to remove dead or decayed tissue, or drill a hole in the tooth to give the infection a chance to drain and to remove any dead pulp. You should never let it get that bad - an abscess is something that can destroy your jawbone. For more information about TMJ disorder treatments and alternative treatments for bruxism, please visit http://bruxismtreatments.org.
How big are the biggest squids?
Answered by Discovery Channel
Some squids are just a couple of inches long, but giant (Architeuthis dux), colossal (Mesonychoteuthis hamiltoni) and jumbo or Humboldt (Dosidicus gigas) squids are absolutely huge. Giant and colossal squids can weigh up to 1,000 pounds (453 kilograms) [source: Smithsonian Institution]. They also can grow to be 35 to 60 feet (10.6 to 18.2 meters) long including their tentacles. To put that size into perspective, that's longer than a school bus. The squid's eyeballs are the size of volleyballs and for a little extra effect, they stick out. Humboldt squids are a bit smaller, growing to only about six or seven feet (1.8 or 2.1 meters) long and weighing in at 100 pounds (45 kilograms).
The reason giant, colossal and Humboldt squids grow so big is a phenomenon called deep-sea gigantism. The squids’ ancestors were much smaller, but as the squids migrated to deeper waters, they had to evolve into bigger animals to deter larger predators. Plus, at greater depths, less food is available. The squids have to travel long distances to find food, so size -- and the endurance that comes with it -- is an advantage.
What is believed to be the largest squid ever caught was found snagged on a fishing line off the coast of Antarctica in 2007. The female colossal squid weighed about 1,000 pounds (454 kilograms) and was 30 feet long (9 meters). She would have made her living at depths as deep as 6,500 feet (1,981 meters) beneath the surface [source: National Geographic]. She was carrying some partially developed eggs, and she had the largest eyes of any known animal on Earth at 11 inches wide (28 centimeters). In fact, her eye is the only fully intact eye of a colossal squid that has ever been retrieved [source: National Geographic]. Far from being the feared sea monster of lore, scientists who examined her thought she was likely slow-moving and would have been at high risk for attack by predators such as the sperm whale, her natural enemy.
Sugar & Artificial Sweeteners
1. Is sugar in the U.S. Dietary Guidelines for added sugars or just sucrose? Answer
2. Does the U.S. Dietary Guidelines call for no more than 10% of calories from sugar mean sucrose and natural occuring sugars or all sugars? Answer
3. Since food labels show how many grams of sugar in a serving, how can I know what a good goal should be? Answer
4. How does sugar affect your body? Answer
5. Why do they have sugar in pop? Answer
6. I have started to take MetRx. A couple of my friends brought up that it contains aspartame. Do you have any opinions or suggestions about MetRx or aspartame or both? Answer
7. Could you comment on the fact that many parents are feeding their very young children diet cold drinks with NutraSweet? Answer
8. Does aspartame cause headaches? Answer
9. What is NutraSweet? Answer
10. Would you serve your children KoolAid with sugar or NutraSweet? Answer
11. What do you think about sugar-free hard candy and breath mints? Answer
12. Is saccharin all right to use? Answer
13. How much sugar is in a sucker? Answer
14. I've started using honey instead of sugar since it is a more natural sweetener. I'm not fat or a diabetic. Answer
3. You don't have to answer this question: why would the USDA make this recommendation and the FDA not require labeling to help us follow that advice?
That question was just so I could vent my frustration concerning the government. Thank you for any help you can give me. I really do appreciate it. Thanks.
The guideline includes both sugars naturally occurring in foods AND sugars added to foods by manufacturers as there is no way for a chemical analysis to separate one from the other in a food. Chemically fructose from an apple looks the same as fructose added as a sweetener even though the fructose sweetener may be in higher concentration for taste.
The 10% guideline applies to all chemicals referred to as "simple sugars". This includes simple sugars like glucose, fructose, lactose, maltose as well as sucrose.
The new food label is to educate the consumer, and the FDA collected a lot of opinions to arrive at the new format. Unless a food makes a health claim, it doesn't have to list any other nutrient. Sugar is listed under "carbohydrates" along with fiber, which is a complex carbohydrate.
Actually, there is less nutrition information on the new food label than the old one. The new food label only contains 2 vitamins and 2 minerals.
Does the U.S. Dietary Guidelines' call for no more than 10% of calories from sugar mean a) added sucrose, b) sucrose, added or naturally occurring, or c) all simple sugars (fructose, glucose, lactose, sucrose, etc.)?
Do you also know whether sugars on the food label of foods refers to added sucrose or to sucrose (whether or not added) or to all simple sugars?
Thank you. I appreciate any help you can give me.
There aren't a lot of foods (300 - 400) with sugar data. The data is generally available only for some brand name foods that analyze for sugar because it can be a major selling point (cereals, for instance). Sugar is not required on the food label by the FDA, which updated its rules in 1994. The sugar data on food labels includes both naturally occurring sugars like fructose (fruit sugar) and lactose (milk sugar) as well as added sugars like sucrose (table sugar), as there is no reasonable way to separate these sugars. Fructose is often used as a sweetener added to foods, and a chemical analysis of a food would not differentiate "added" fructose from "naturally occurring" fructose. Honey is another "natural" sugar, but is also added to foods as a sweetener.
The U.S. Dietary Guidelines recommend you moderate your intake of sugar which includes sugar you add to food at the table as well as sugar added by food manufacturers. Unfortunately, this lumps fruit sugar and milk sugar (good sources) in the same analysis with sugar added during manufacturing (carbonated beverages, cakes, pie, cookies, fruit drinks, ice cream and candy). The "sugar" content listed on food labels include both mono and disaccharides (fructose, lactose, maltose, sucrose), but not starch.
Hi. I'm trying to cut down on the amount of sugars I eat. Since the new food labels typically show how many grams of sugar is in a serving, how can I know what a good "goal" should be?
I'd like to record how many grams I eat now and then compare it to how many grams I should be eating. What about percentage of sugars to carbohydrates? I can easily find a goal for my height and weight for carbohydrates, but how much of that should be in the form of sugars? For example, if my carbohydrate goal is 300 grams per day, how many of those grams should be sugar?
If you eat 300 grams of carbohydrate each day, then up to 30 grams could come from sugars. However, the new food label lumps all sugars together including milk (lactose) and fruit sugar (fructose) along with table sugar (sucrose). For instance one serving of fruit canned in fruit juice has 14 grams of sugar and 1 cup of skim milk has 12 grams of sugar. Yet neither of these foods has sugar "added" to them. Unfortunately, the new food label lumps all sugars (disaccharides) together and doesn't make it easier for consumers to identify sources of "added" sugar.
Carbohydrates, including sugars, are your body's main source of energy. There are two forms of sugar in the food we eat. There are naturally occurring sugars in fruits and dairy products and there are added sugars (white, brown or powdered sugar as well as corn syrup solids) in many processed foods.
First start by reading the ingredient list on a food label. Learn to differentiate between ingredients that are added sugars (corn syrup solids or sucrose) and natural sugars like lactose (milk sugar) or fructose (fruit sugar) that are inherent in raw or basic foods. It may take some learning on your part to recognize sources of added versus natural sugars, and it may be a bit confusing at times because fructose is also used as an added sugar.
When you eat foods that contain added sugars, choose foods that also contain nutrients like vitamins, minerals or fiber. Limit foods that are high in sugar with few other nutrient values (i.e. 15% vitamin C) listed on the Nutrition Facts label. Avoid foods such as candy, non-diet soda pop, jam, jelly and syrup. You should get most of your carbohydrates (sugars) from starchy foods such as pasta, rice, bread, other grain products, potatoes and other starchy vegetables. These foods are great sources of complex carbohydrates, vitamins, minerals and fiber. These foods are considered to be "nutrient rich". Fruit and fruit juice is another great source of carbohydrates that can provide vitamin A, C, folacin, potassium and other important nutrients. When you choose foods that contain sugars, read the food labels to make sure you're getting some nutrients to go along with your sugar. Remember that moderate amounts of sugar, no matter what their source, are part of a healthy diet.
How does sugar affect your body?
Sugar adds calories, and if you eat more than you need, you will gain weight. Weight gain increases your risk of getting heart disease, diabetes, high blood pressure or even some types of cancer. However, if you are underweight, sugar can add extra calories so that you can gain weight. If your body doesn't make enough insulin, as in a diabetic, then the sugar you eat raises the sugar in your blood to unhealthy levels.
The body breaks down sugar into the sugar you find in your blood (glucose). Unfortunately, there are no vitamins or minerals in sugar and so it is called an "empty" calorie. That is why it is the first food to be eliminated from a weight loss diet. By the way, it doesn't matter if the sugar is white or brown. The amount of molasses in brown sugar is so low it doesn't contribute enough of any vitamin or mineral to count on a food label.
Why do they have sugar in pop?
Well, imagine plain water with food coloring and artificial flavor. You could make that at home. Sugar tastes good and adds flavor. But there are 9 teaspoons of sugar in 12 ounces of pop (soda). Plain water or fruit juice would be a better alternative.
For the past two weeks I have started to take MetRx. A couple of my friends have brought up the fact that it contains aspartame which supposedly kills brain cells and has not been approved by the FDA. I take this mix twice a day and the difference I have noticed is that I am eating less. My concern is with the ingredient aspartame - do you have any opinions or suggestions about MetRx or aspartame or both? Thank you in advance.
Aspartame has been approved by the FDA for a number of years and is considered safe. It contains two amino acids phenylalanine and aspartic acid. There is no research to substantiate that it kills brain cells.
I am unfamiliar with MetRx. If you send the nutritional analysis and info as to why it is used, I can give you more feedback.
Children do not need diet beverages (sugar free) or other diet foods, unless the child is a diabetic. At the present, there is no concern with NutraSweet usage by children. I am concerned though about children who involuntarily share their parent's low calorie diet and parent's preoccupation with weight control.
Children at least through age 10 need sufficient calories and protein for normal growth and development. This can be grossly measured by height and weight. There are published tables for comparison. If a child's height is in the 100th percentile, a weight in the 100th percentile (range of 75% to 125% is appropriate) is OK. If a child's height or weight is less than the 25th percentile, he / she should be seen by a doctor to determine why. Smallness may be genetic, but it can also be induced by an insufficient diet.
Other vitamins and minerals are important and food is the best source of these nutrients. When calories are restricted, so are these vitamins and minerals necessary for growth and development.
Infants and very young children cannot always say when they are thirsty or hungry. A good rule of thumb is for adults to offer children fluids when they themselves drink liquids. Another rule of thumb is to look at the color of urine. During the day, it should be colorless and odorless.
Does aspartame cause headaches? I think I get migraines from aspartame. I was on vacation in Jamaica over Spring break and I didn't have any migraine headaches. The only thing different that I noticed was the pop had saccharin in it instead. I do drink several cans of diet pop a day.
Try eliminating all foods that contain aspartame from your diet for one month. Document on paper any headache. Also, document when your menstrual cycle starts and ends. Hormone levels can cause fluctuations in not only headaches, but other allergic reactions too. Reintroduce one aspartame containing food every three days.
Since you were on vacation, your absence of migraines may have been due to the lack of stress in your life. With the variety of foods and beverages a person eats, it is very difficult to pinpoint one specific food as the causative agent. Try the above elimination diet to see if aspartame is really the cause. Contact the NutraSweet Company in Skokie, Illinois if you think there is a relationship between aspartame and your headaches.
The Center for Disease Control in Atlanta did some studies of physical complaints and the use of aspartame. After extensive interviews with those persons having complaints, the CDC found no clear-cut relationship between the complaints and ingestion of aspartame.
What is in NutraSweet? It tastes good and doesn't leave a bitter aftertaste like saccharin.
NutraSweet is the brand name of aspartame. Aspartame contains two amino acids, aspartic acid and phenylalanine. Together they make foods taste sweeter. Persons with PKU (phenylketonuria) cannot use aspartame because it would raise their blood levels of phenylalanine. PKU is an inherited disease and is tested for at birth.
Would you serve your children KoolAid with sugar or NutraSweet?
If you are concerned about sugar versus aspartame, unless your child is a diabetic, give them the sugar sweetened beverage. The sugar content of these powder mixes is about the same as carbonated beverages (9 teaspoons of sugar per 12-ounce beverage). I am not opposed to occasionally giving children, who are within a normal weight range, sugar-based beverages, fruit drinks or soda. Special occasions, like birthday parties, are those occasions when a sugar containing beverage is fine.
I am concerned about diet conscious adults putting children on diets however. Try to avoid feeding children "diet" foods. An adequate amount of calories and nutrients should be available to growing children to insure brain and physical development.
Why not offer other beverage choices to children? In hot weather, double-diluted fruit juice, plain cold water and frozen fruit juice cubes are good thirst quenchers.
Taste preference is something that is learned from birth through the age of six. If sweet foods are offered frequently, a child will be more likely to develop a preference for sweets. Don't give into children's pressure to buy sugar-sweetened beverage all the time.
What do you think about sugar-free hard candy and breath mints? They're good tasting, low calorie candies.
Most sugar-free candy, gum and breath mints taste good. Read the label to see which artificial sweetener is an ingredient. Sorbitol and mannitol are frequently used in these low calorie candies.
Sorbitol and mannitol are sugar alcohols. They are readily converted to fructose and glucose. The problem with these sweeteners is that they are slowly absorbed from the intestines and may produce a laxative or gaseous effect. They are low in calories.
I would not suggest you eat a lot of sugar-free candy at a time. If gas or diarrhea bothers you, omit the sugar-free candy for one week and see if your symptoms go away. Try the sugar-free candy again and see if the symptoms return. This is an elimination diet that will help you identify any symptoms you may have.
Is saccharin all right to use? With all the cancer scares about artificial sweeteners, I want to know if it is safe to use.
Saccharin was thought to be one cause of bladder cancer in men. Because of a law called the Delaney Clause, any substance that is known to cause cancer in man or animals must be banned. At the time the Food and Drug Administration was considering banning saccharin, there wasn't any other artificial sweetener on the market. Since that time, any food with saccharin in it, has carried a warning label that the use of saccharin may be hazardous to your health.
The Center for Disease Control in Atlanta studied persons with bladder cancer. They did not find any higher incidence of bladder cancer among saccharin users as compared to non-saccharin users.
The numbers of foods in today's market with saccharin are fewer because of the increased use of aspartame. Aspartame does not leave a bitter aftertaste as saccharin does.
How much sugar is in a sucker?
The size of suckers vary, but in a clear, hard candy sucker, there should be about one to two teaspoons of sugar (four to eight grams).
Are you brushing your teeth after eating a sucker? Sweet sticky sugars like suckers or candy adhere to tooth enamel. The bacteria in your mouth feed on simple sugars. The bacteria increase the acidity of your mouth, which in turn eats through the enamel covering on your teeth. This increases cavities (dental caries). Frequent brushing helps remove sticky sweets that adhere to teeth. Use of fluoridated toothpaste also helps reduce cavities.
I've started using honey instead of sugar since it is a more natural sweetener. I've been trying to cut down on sugar. I'm not fat or a diabetic. I just think it would be more healthy.
Well, you are still eating sugar if you have switched to honey. Honey is mostly a mix of simple sugars, chiefly fructose (fruit sugar) and glucose, with smaller amounts of sucrose (table sugar).
Honey can be used to replace sugar in a recipe; 3/4 cup of honey can replace one cup of sugar in a recipe. You will have to reduce the liquid by one-half cup for each cup of honey you add to the recipe though.
If you want to cut down on your total intake of sugar, consider decreasing all sugars, white, brown, powdered, raw, as well as honey. You could limit your intake of foods high in sugar to once a week rather than eating sweets daily. Another significant reduction in sugar could be made by adding only 1/2 to 1/3 the amount of sugar or honey called for in a recipe. You will be surprised how good cookies taste with half the sugar.
Sugar is a natural food. It comes from sugar cane or sugar beets. It is considered an empty calorie since there are not any vitamins or minerals in sugar. Some advocates of honey claim that honey has vitamins and minerals. Honey does contain some nutrients, but one tablespoon of honey will provide less than 1/100 of your Recommended Dietary Allowances (RDA) for protein, thiamin, riboflavin, niacin, vitamin C, calcium and iron. There is no vitamin A in honey. This is not a significant contribution to your diet nutrient wise and honey is adding calories along with those trace amounts of nutrients.
Who is a delegate?
A model UN delegate is a student who assumes the role of an ambassador to the United Nations at a Model UN event. A Model UN delegate does not have to have experience in international relations. Anyone can participate in Model UN, so long as they have the ambition to learn something new, and to work with people to try and make a difference in the world. Model UN students tend to go on to become great leaders in politics, law, business, education and even medicine.
Who is the chair?
The presiding official of a committee.
What is a Secretary General?
The Secretary General is the highest student position in the ECMUN program. Each year a Secretary General (SG) is selected from among the ECMUN staff members. The Secretary General oversees all aspects of the program and is responsible for coordinating the work of the MUN club and Secretariat.
What is Secretariat?
The team of students responsible for all aspects of the conference.
What is Security Council (SC)?
What is Historical Security Council (HSC)?
It has the same role as the Security Council, but discusses global events in a specific year; ECMUN has chosen 2001.
What is European Council (EC)?
It is an institution of the European Union. It comprises the heads of state or government of the EU member states, along with the President of the European Commission and the President of the European Council. At ECMUN we follow MUN Parliamentary procedure for sessions.
What is United Nations Convention on Climate Change (UNFCCC)?
What is United Nations Humans Rights Council (UNHRC)?
It is a subsidiary body of the United Nations General Assembly. The council works closely with the Office of the High Commissioner for Human Rights (OHCHR) and engages the United Nations' special procedures.
What is United Nations Children’s Fund (UNICEF)?
What is United Nations Development Program (UNDP)?
What is United Nation Educational, Scientific and Cultural Organization (UNESCO)?
What is a president letter?
It is a letter from the chair of the committee that explains the chairing style and topics in the committee.
What is a position paper?
An essay detailing your country’s policies on the topics being discussed in your committee. Writing a position paper will help you organize your ideas so that you can share your country’s position with the rest of the committee.
What is an opening speech?
The first speech given by any delegate.
What are good pages to use for research?
For more information, refer to the conference handbook and the President's letter of your committee.
What is parliamentary procedure?
The set of rules by which a MUN conference is conducted.
What should I wear?
You should be dressed formally; yes, that means suits, skirts, and dress pants.
Can I use a laptop during sessions?
No, except during unmoderated caucuses, to write up working papers.
What should I bring to the conference?
Bring your research materials in printed form, since laptops may be used only during unmoderated caucuses to write digital working papers.
What happens if the rules are violated?
You will be warned by the Secretary General, and in extreme cases you may be suspended from the conference.
Will there be refreshments available throughout the day?
Yes, the conference will provide snacks and drinks. There are also vending machines in the building with food, coffee, and drinks that you can buy.
Herpes simplex virus
From Wikipedia, the free encyclopedia
This article is about the virus. For information about the disease caused by the virus, see Herpes simplex.
TEM micrograph of a herpes simplex virus
Virus classification
Group: Group I (dsDNA)
Order: Herpesvirales
Family: Herpesviridae
Subfamily: Alphaherpesvirinae
Genus: Simplexvirus
Species:
• Herpes simplex virus 1 (HSV-1)
• Herpes simplex virus 2 (HSV-2)
Main article: Herpes simplex
HSV-1 and -2 are transmitted by contact with an infected area of the skin during re-activations of the virus. Herpes simplex virus (HSV)-2 is periodically shed in the human genital tract, most often asymptomatically, and most sexual transmissions occur during asymptomatic shedding.[3] Asymptomatic reactivation means that the virus causes atypical, subtle, or hard-to-notice symptoms that are not identified as an active herpes infection. In one study, daily genital swab samples found HSV-2 on a median of 12-28% of days among those who have had an outbreak, and on 10% of days among those with asymptomatic infection, with many of these episodes occurring without visible outbreak ("subclinical shedding").[4]
In another study, seventy-three subjects were randomized to receive valacyclovir 1 g daily or placebo for 60 days each in a two-way crossover design, to compare the effect of valacyclovir versus placebo on asymptomatic viral shedding in immunocompetent, HSV-2-seropositive subjects without a history of symptomatic genital herpes infection. A daily swab of the genital area was self-collected and tested for HSV-2 by polymerase chain reaction. The study found that valacyclovir significantly reduced shedding during subclinical days compared to placebo, a 71% reduction: 84% of subjects had no shedding while receiving valacyclovir versus 54% of subjects on placebo, and 88% of patients receiving valacyclovir had no recognized signs or symptoms versus 77% for placebo.[5]
For HSV-2, subclinical shedding may account for most of the transmission, and one study found that infection occurred after a median of 40 sex acts.[4] Atypical symptoms are often attributed to other causes such as a yeast infection.[6][7] HSV-1 is often acquired orally during childhood. It may also be sexually transmitted, including contact with saliva, such as kissing and mouth-to-genital contact (oral sex).[8] HSV-2 is primarily a sexually transmitted infection but rates of HSV-1 genital infections are increasing.[6]
Both viruses may also be transmitted vertically during childbirth, although the real risk is very low.[9] The risk of infection is minimal if the mother has no symptoms or exposed blisters during delivery. The risk is considerable when the mother gets the virus for the first time during late pregnancy.[10]
Viral structure
Animal herpes viruses all share some common properties. The structure of herpes viruses consists of a relatively large double-stranded, linear DNA genome encased within an icosahedral protein cage called the capsid, which is wrapped in a lipid bilayer called the envelope. The envelope is joined to the capsid by means of a tegument. This complete particle is known as the virion.[11] HSV-1 and HSV-2 each contain at least 74 genes (or open reading frames, ORFs) within their genomes,[12] although speculation over gene crowding allows as many as 84 unique protein coding genes by 94 putative ORFs.[13] These genes encode a variety of proteins involved in forming the capsid, tegument and envelope of the virus, as well as controlling the replication and infectivity of the virus. These genes and their functions are summarized in the table below.
The genomes of HSV-1 and HSV-2 are complex and contain two unique regions called the long unique region (UL) and the short unique region (US). Of the 74 known ORFs, UL contains 56 viral genes, whereas US contains only 12.[12] Transcription of HSV genes is catalyzed by RNA polymerase II of the infected host.[12] Immediate early genes, which encode proteins that regulate the expression of early and late viral genes, are the first to be expressed following infection. Early gene expression follows, to allow the synthesis of enzymes involved in DNA replication and the production of certain envelope glycoproteins. Expression of late genes occurs last; this group of genes predominantly encode proteins that form the virion particle.[12]
Cellular entry
A simplified diagram of HSV replication
Genetic inoculation
After the viral capsid enters the cellular cytoplasm, it is transported to the cell nucleus. Once attached to the nucleus at a nuclear entry pore, the capsid ejects its DNA contents via the capsid portal. The capsid portal is formed by twelve copies of portal protein, UL6, arranged as a ring; the proteins contain a leucine zipper sequence of amino acids which allow them to adhere to each other.[16] Each icosahedral capsid contains a single portal, located in one vertex.[17][18] The DNA exits the capsid in a single linear segment.[19]
Immune evasion
Latent infection
The virus can be reactivated by illnesses such as colds and influenza, eczema, emotional or physical stress, gastric upset, fatigue or injury, menstruation, and possibly exposure to bright sunlight. Genital herpes may be reactivated by friction.
Viral genome
The open reading frames (ORFs) of HSV-1[12][29]
Gene | Protein | Function/description
UL4 | UL4 | Unknown
UL6 | Portal protein UL-6 | Twelve of these proteins constitute the capsid portal ring through which DNA enters and exits the capsid.[16][17][18]
UL21 | UL21 | Tegument protein[31]
UL41 | UL41; VHS | Tegument protein; virion host shutoff[22]
UL43 | UL43 | Membrane protein
US2 | US2 | Unknown
The Herpes simplex 1 genomes can be classified into six clades.[33] Four of these occur in East Africa, one in East Asia, and another in Europe/North America. This suggests that the virus may have originated in East Africa. The most recent common ancestor of the Eurasian strains appears to have evolved ~60,000 years ago.[34] The East Asian HSV-1 isolates have an unusual pattern that is currently best explained by the two waves of migration responsible for the peopling of Japan.
The mutation rate has been estimated at ~1.38×10−7 substitutions/site/year.[33] In clinical settings, mutations in either the thymidine kinase gene or the DNA polymerase gene have caused resistance to acyclovir; most such mutations occur in the thymidine kinase gene rather than the DNA polymerase gene.[35]
Treatment and vaccine development
Connection between facial sores and Alzheimer's disease
In the presence of a certain gene variation (APOE-epsilon4 allele carriers), a possible link between HSV-1 (the virus that causes cold sores, or oral herpes) and Alzheimer's disease was reported in 1979.[37] HSV-1 appears to be particularly damaging to the nervous system and increases the risk of developing Alzheimer's disease. The virus interacts with the components and receptors of lipoproteins, which may lead to the development of Alzheimer's disease.[38] This research identifies HSVs as the pathogen most clearly linked to the establishment of Alzheimer's.[39] According to a study done in 1997, without the presence of the gene allele, HSV-1 does not appear to cause any neurological damage or increase the risk of Alzheimer's.[40] However, a more recent prospective study published in 2008, with a cohort of 591 people, showed a statistically significant difference in the incidence of Alzheimer's disease between patients with antibodies indicating recent reactivation of HSV and those without such antibodies, without direct correlation to the APOE-epsilon4 allele.[41] The trial had a small sample of patients who did not have the antibody at baseline, however, so the results should be viewed as highly uncertain. In 2011 Manchester University scientists showed that treating HSV-1-infected cells with antiviral agents decreased the accumulation of β-amyloid and P-tau, and also decreased HSV-1 replication as expected.[42]
Multiplicity reactivation
Multiplicity reactivation (MR) is observed when HSV particles are exposed to doses of a DNA-damaging agent that would be lethal in single infections but are then allowed to undergo multiple infection (i.e. two or more viruses per host cell). Enhanced survival of HSV-1 due to MR occurs upon exposure to different DNA damaging agents, including methyl methanesulfonate,[44] trimethylpsoralen (which causes inter-strand DNA cross-links),[45][46] and UV light.[47] After treatment of genetically marked HSV with trimethylpsoralen, recombination between the marked viruses increases, suggesting that trimethylpsoralen damage stimulates recombination.[45] MR of HSV appears to depend partially on the host cell recombinational repair machinery, since skin fibroblast cells defective in a component of this machinery (i.e. cells from Bloom's syndrome patients) are deficient in MR.[47] These observations suggest that MR in HSV infections involves genetic recombination between damaged viral genomes resulting in production of viable progeny viruses. HSV-1, upon infecting host cells, induces inflammation and oxidative stress.[48] Thus it appears that the HSV genome may be subjected to oxidative DNA damage during infection, and that MR may enhance viral survival and virulence under these conditions.
Use as an anti-cancer agent
Herpes simplex virus is considered as a potential therapy for cancer and has been extensively clinically tested to assess its oncolytic (cancer killing) ability.[49] Interim overall survival data from Amgen's phase 3 trial of a genetically-attenuated herpes virus suggests efficacy against melanoma.[50]
Use in neuronal connection tracing
Herpes simplex virus is also used as a transneuronal tracer, defining connections among neurons by virtue of traversing synapses.[51]
3. ^ Schiffer JT, Mayer BT, Fong Y, Swan DA, Wald A (2014). "Herpes simplex virus-2 transmission probability estimates based on quantity of viral shedding". J R Soc Interface 11 (95): 20140160. doi:10.1098/rsif.2014.0160. PMID 24671939.
5. ^ Sperling RS, Fife KH, Warren TJ, Dix LP, Brennan CA (March 2008). "The effect of daily valacyclovir suppression on herpes simplex virus type 2 viral shedding in HSV-2 seropositive subjects without a history of genital herpes". Sex Transm Dis 35 (3): 286–90. doi:10.1097/OLQ.0b013e31815b0132. PMID 18157071.
20. ^ Abbas et al. (2009). Cellular and Molecular Immunology. Elsevier Inc.
29. ^ Search in UniProt Knowledgebase (Swiss-Prot and TrEMBL) for: HHV1
32. ^ Matis J, Kúdelová M (2011). "HSV-1 ICP0: paving the way for viral replication". Future Virology 6 (4): 421–429. doi:10.2217/fvl.11.24. PMID 12083325.
35. ^ Hussin A, Md. Nor NS, Ibrahim N (2013). "Phenotypic and genotypic characterization of induced acyclovir-resistant clinical isolates of herpes simplex virus type 1". Antivir Res 100: 306–313. doi:10.1016/j.antiviral.2013.09.008.
36. ^ Kimberlin DW, Whitley RJ, Wan W, Powell DA, Storch G, Ahmed A, Palmer A, Sánchez PJ, Jacobs RF, Bradley JS, Robinson JL, Shelton M, Dennehy PH, Leach C, Rathore M, Abughali N, Wright P, Frenkel LM, Brady RC, Van Dyke R, Weiner LB, Guzman-Cottrill J, McCarthy CA, Griffin J, Jester P, Parker M, Lakeman FD, Kuo H, Lee CH, Cloud GA (2011). "Oral acyclovir suppression and neurodevelopment after neonatal herpes". N. Engl. J. Med. 365 (14): 1284–92. doi:10.1056/NEJMoa1003509. PMC 3250992. PMID 21991950.
41. ^ Letenneur L, Pérès K, Fleury H, Garrigue I, Barberger-Gateau P, Helmer C, Orgogozo JM, Gauthier S, Dartigues JF (2008). "Seropositivity to herpes simplex virus antibodies and risk of Alzheimer's disease: a population-based cohort study.". PLoS ONE 3 (11): e3637. doi:10.1371/journal.pone.0003637. PMC 2572852. PMID 18982063.
44. ^ Das SK (1982). "Multiplicity reactivation of alkylating agent damaged herpes simplex virus (type I) in human cells". Mutation research 105 (1–2): 15–18. doi:10.1016/0165-7992(82)90201-9. PMID 6289091.
45. ^ a b Hall JD, Scherer K (1981). "Repair of psoralen-treated DNA by genetic recombination in human cells infected with herpes simplex virus". Cancer Research 41 (12 Pt 1): 5033–5038. PMID 6272987.
46. ^ Coppey J, Sala-Trepat M, Lopez B (1989). "Multiplicity reactivation and mutagenesis of trimethylpsoralen-damaged herpes virus in normal and Fanconi's anaemia cells". Mutagenesis 4 (1): 67–71. doi:10.1093/mutage/4.1.67. PMID 2541311.
47. ^ a b Selsky CA, Henson P, Weichselbaum RR, Little JB (1979). "Defective reactivation of ultraviolet light-irradiated herpesvirus by a Bloom's syndrome fibroblast strain". Cancer Research 39 (9): 3392–3396. PMID 225021.
48. ^ Valyi-Nagy T, Olson SJ, Valyi-Nagy K, Montine TJ, Dermody TS (2000). "Herpes Simplex Virus Type 1 Latency in the Murine Nervous System is Associated with Oxidative Damage to Neurons". Virology 278 (2): 309–321. doi:10.1006/viro.2000.0678. PMID 11118355.
49. ^ Varghese S, Rabkin SD (1 December 2002). "Oncolytic herpes simplex virus vectors for cancer virotherapy". Cancer Gene Therapy 9 (12): 967–978. doi:10.1038/sj.cgt.7700537. PMID 12522436.
51. ^ Norgren, R. B., Jr., & Lehman, M. N. (1998). "Herpes simplex virus as a transneuronal tracer [Review]". Neurosci Biobehav Rev 22 (6): 695–708. doi:10.1016/s0149-7634(98)00008-6. PMID 9809305.
Hera, the Greek goddess called the Queen of Heaven, was a powerful queen in her own right long before her marriage to Zeus, the mighty king of the Olympian gods; she is the Greek counterpart of the Babylonian goddess Inanna/Ishtar. The goddess Hera ruled over the heavens and the earth, responsible for every aspect of existence, including the seasons and the weather. Honoring her great capacity to nurture the world, her very name translates as the "Great Lady". Our word galaxy comes from the Greek word gala, meaning "mother's milk" . . . legend has it that the Milky Way was formed from the milk spurting from the breasts of the Greek goddess Hera, Queen of Heaven.
Where drops fell to earth, fields of lilies sprung forth. She was also worshipped as the Roman goddess Juno, and the month of June (which is the most popular month for weddings) is named in her honor. It is partly on account of Hera's great beauty, and particularly her beautiful, large eyes, that she is linked to her sacred animal, the cow, and also the peacock with its iridescent feathers having "eyes". The cow symbolizes the goddess Hera's nurturing watchfulness over her subjects, while the peacock symbolizes her luxury, beauty, and immortality.
In ancient times Hera was revered as the only one of the Greek goddesses who accompanied a woman through every step of her life. The goddess Hera blessed and protected a woman's marriage, bringing her fertility, protecting her children, and helping her find financial security. Hera was, in short, a complete woman, overseeing both private and public affairs. But it was Hera's uncommon beauty that attracted the attention of her future husband, the lusty Zeus, who tricked Hera into taking him to her breast by changing himself into a small, frightened and wounded bird that elicited her pity. Once cradled in Hera's bosom, Zeus changed back into his manly form and tried to take her . . . but she resisted his advances, putting him off until he promised to marry her. The delay only increased his desire for Hera and, once married, they had the longest honeymoon on record, lasting over 300 years!
Unfortunately, the goddess Hera's life was not to remain so enviable. Once the honeymoon was over, Zeus reverted to his earlier "playboy" lifestyle, married or not, compulsively seducing or raping whichever of the Greek goddesses or mortal women caught his wandering eye. His amorous exploits left the regal goddess Hera feeling betrayed and humiliated on numerous occasions. To make matters even worse, Zeus often showed more favor towards the offspring of his illicit liaisons than he did to the children Hera bore him. In Greek mythology Hera, although wounded, remained faithful and steadfast in her loyalty to Zeus, electing instead to vent her fury on "the other women" rather than Zeus himself even though it was usually Zeus who had deceived, seduced or raped the innocent women.
This wasn't always Hera's reaction, however. On one occasion she decided to give Zeus a "taste of his own medicine" by conceiving and delivering a child by herself, proving that she really didn't need him anyway. It didn't work out quite as she'd hoped. She gave birth, as the sole parent, to Hephaestus (God of the Forge) who was born with a deformity that made him lame. Zeus was not impressed, and Hera rejected her son, sending him away from Mount Olympus to grow up among the mortals.
At other times, in reaction to his continuing infidelities, the goddess Hera simply withdrew from Zeus and the other Olympian gods and goddesses and wandered around the earth, often in darkness, always eventually ending up back at the home where she'd spent her happy youth. In spite of how he had mistreated her, Zeus did love Hera and, more than that, felt as if part of himself was missing when she was not there for him. Once, panicked that Hera didn't seem to be in any hurry to return this time, he invited her to a "mock" marriage ceremony that he'd arranged to a princess near her home. She couldn't help but be amused to discover him making his vows, not to a princess, but a statue! Hera's laughter broke the ice, and she forgave him and returned to Mount Olympus to resume her role as wife and queen.
It is unfortunate that it is not the goddess Hera's nurturing or her steadfastness in the face of adversity that are remembered today, but mostly the stories of her jealousy and vindictiveness. Some historians argue that the goddess Hera was unjustly portrayed in the famous stories of Homer, probably because he was himself victimized by a mean and shrewish wife. More than any of the other Greek goddesses, the goddess Hera reminds us that there is both light and dark within each of us and that joy and pain are inextricably linked in life. The Greek goddess Hera represents the fullness of life and affirms that we can use our own wisdom in the pursuit of any goal we choose.
Much love to you all
Ishtar )O(
Appendix One
Palestine's Jewish Population, A.D. 638-1800
You will read:
I. Palestine Before the Crusades, 638-1099.
II. Palestine During the Crusader Era, 1099-1291.
III. Palestine Under the Mamluk Dynasty, 1291-1517.
IV. Palestine Under the Ottoman Turks, 1517-1800.
This appendix expands on some historical points and population estimates for part of the period covered briefly in Chapter Two.
According to Israeli demographer Roberto Bachi, from the start of the Arab period in A.D. 638 until into the nineteenth century, Palestine's Jewish population was always less than ten thousand and in some periods was only a few thousand.
I. Palestine Before the Crusades, 638-1099.
After Arabian Muslims conquered Palestine, the new rulers, primarily military leaders, left its civil service in native Byzantine hands. Greek remained the primary language. Palestine's conqueror, Caliph Omar I (582-644), was a devout, austere Muslim and personal friend of Mohammed. He treated Jews and Christians quite well. At the south end of the Temple Mount, Omar or a successor began to build a simple mosque, which was later replaced by the magnificent al Aqsa Mosque, which is still there. Christian pilgrims were welcome in Palestine.
Under ensuing caliphs, treatment of Jews and Christians fluctuated. In about 688-91 Caliph Abd al-Malik (ruled 685-705), in a political move against his Muslim competitors in Arabia, erected the exquisite Dome of the Rock on the site of Solomon's temple. With it he hoped to divert Muslim pilgrims to Jerusalem and away from Mecca and Medina. Jews worked on the shrine's staff in lieu of paying taxes. Christian conversions to Islam under the harsh Caliph Omar II (ruled 717-720) plus continuing Muslim immigration from surrounding lands changed Palestine from a primarily Christian to a primarily Muslim area. Caliph Harun al-Rashid (ruled 786-809) forced Christians to wear blue badges and Jews yellow ones. However, his son, Caliph al-Mamun (ruled 813-833), restored religious tolerance. In about 935, Jerusalem Jews were allowed to build a synagogue near the Wailing Wall.
During the tenth and eleventh centuries, as Fatimid power declined, Palestine was subject to raids by Seljuk Turks and Bedouin tribes. In 1071 Seljuks captured Jerusalem and mistreated and overcharged Christian pilgrims. In 1076 Jerusalem revolted against the Seljuks but failed; many of its inhabitants were ordered killed. This deteriorating situation helped trigger the Crusades. In 1098 the Fatimids recaptured the city. The continuing havoc and resulting emigration also reduced Palestine's Jewish population to only a few thousand before the first Crusaders came.
These mainly Frankish soldiers conquered Jerusalem in 1099 after a forty-day siege. Reportedly defying orders, they massacred its Jewish and Muslim inhabitants - almost forty thousand men, women and children. Some were tortured. This halted 461 years of Muslim rule over Palestine.
II. Palestine During the Crusader Era, 1099-1291.
Palestinian Jews experienced mixed fortunes under the Crusaders. Jerusalem's first Crusader ruler initially reinstated the ban against Jews living there; however, he or his successors exempted a few Jewish families. By 1110 the Crusaders, having gained military control throughout most of Palestine, relaxed their policies toward local populations and allowed them to remain. Jewish community life was centered in Acre, which had some two hundred Jewish families in 1167, in Ashkelon and some other cities. In about 1169 a Jewish traveler wrote that some two hundred Jewish families lived in the Tower of David area of Jerusalem. The Crusaders brought from Europe a rigid feudal system, to which Palestinian Jews had to conform.
In 1187 Saladin (ruled 1187-1193) defeated the Crusaders militarily, and most of Palestine returned to Muslim rule. He befriended Jerusalem's Jews and allowed other Jews to move into it, where they have been allowed to live ever since (except in the Old City while under Jordan's rule from 1948 to 1967). Under Saladin Jews increased both in Jerusalem and elsewhere in Palestine.
In 1211 three hundred rabbis and other Jewish scholars fleeing persecution in France and England settled in Crusader Palestine and built new synagogues and schools. In 1229 Frederick II, the Holy Roman Emperor, negotiated a ten-year treaty that returned Nazareth, Jerusalem and Bethlehem from Muslim to Christian rule. In 1244 Kwarizmian Turks invaded Jerusalem, plundered and destroyed it and massacred many of its inhabitants. However, in 1248 the Turks were driven out by Mamluks, a Muslim dynasty based in Egypt, which ruled the city until 1517. In 1260 Mongols invaded northern Palestine but were decisively defeated before reaching Jerusalem. The Mamluks also kept pounding away at the Crusaders in their increasingly few remaining fortresses along the coast. This frequent warfare played havoc with Palestine's inhabitants. In 1263 a traveler wrote that "only a handful" of Jews lived in Palestine. Virtually all of Jerusalem's Jews fled, many of them moving to Shechem, thirty miles north, in Samaria. At one point during that decade a visiting rabbi reported that Jerusalem had only two thousand Muslims, three hundred Christians, and one or two Jewish families. In 1267 Nahmanides, a Jewish visitor to Jerusalem, wrote that there were barely enough Jewish men there to form a minyan - ten men - to hold prayers in their house on the Sabbath. Nahmanides started a synagogue there but did not form a Jewish community.
Despite raids from the east and the gradual Mamluk advance from the south, Ashkenazi Jews from Europe immigrated to Palestine, especially to the fortified Crusader towns along the northern coast. The Jewish population in Acre, the Crusader capital from 1191 to 1291, grew considerably during the thirteenth century. Until this large influx of Ashkenazim, the majority of Palestine's Jewry had been Arabic-speaking "Eastern" Jews. Tensions developed between the two groups, who soon were declaring bans on each other. In 1291 the Mamluks drove the Crusaders from Acre, their last Palestinian fort. The Sultan, avenging the slaughter by the Crusaders in Jerusalem 192 years earlier, ordered a massacre, which killed many Acre Jews.
III. Palestine Under the Mamluk Dynasty, 1291-1517.
Having ejected the Crusaders, the Mamluks wanted to prevent their return. They therefore destroyed the Crusader beachheads - Palestine's coastal cities. This forced these cities' people, including Jews, to move inland. But destruction of the port cities deprived the inland cities of commercial access to the sea and to other international trade routes, causing a depression. Jewish emigration from Palestine exceeded immigration from Europe and North Africa; the Jewish population hit a new low which continued for several decades. This trend was somewhat reversed during the mid 1300s as Jewish refugees increased, especially from France and Germany, where Jews were persecuted partly because of hysteria following the Black Death. In the early 1480s a visiting Christian reported that five hundred Jewish families lived in Palestine. However, a Jewish traveler at about that time said only about half that many lived there. Economic conditions were poor; Palestine shared with other lands in droughts, famines, earthquakes, epidemics, high taxes, high prices, government corruption, and attacks by Bedouins and bandits. In 1481 marauders attacked Jerusalem and plundered and burned nearby Ramla. A visiting rabbi that year reported that Palestinian Jews' primary income was donations from Diaspora Jews. In the latter part of the 1400s the Mamluks forbade Jews and Christians to visit the Temple Mount or the Patriarchs' tombs in Hebron.
Spanish monarchs in 1492 expelled some 175,000 Jews from Spain and subsequently additional Jews from Spanish-ruled areas in the Mediterranean. This immense cruelty resulted in refugees settling in the four Palestinian cities holy to Jews - Jerusalem, Hebron, Safed, and Tiberias - during the late fifteenth and early sixteenth centuries. Estimates of Jewish population differ, but each of them indicates it was very low. One estimate in the early sixteenth century put Palestine's Jewish population at no more than five thousand.
IV. Palestine Under the Ottoman Turks, 1517-1800.
In 1517 Ottoman Turks completed their conquest of Palestine. An estimated five hundred Jewish families lived in the entire country at the time, with less than half of them in Jerusalem. Sultan Suleiman the Magnificent (ruled 1520-1566) greatly enhanced the city by restoring and erecting Muslim shrines, including the Dome of the Rock, and by repairing the city wall and adding handsome gates, which still add to the city's charm. He was a great improvement over the Mamluks. Under Suleiman and the Turks generally, Jews were treated relatively well. They continued to have freedom of worship and freedom to administer their own marriage, divorce and inheritance laws.
By 1550 Palestine's total population was probably more than 200,000, of which 90 percent were Muslim, and 10 percent were non-Muslims, a substantial number of whom were Jews. By that time Jerusalem had an estimated six thousand Muslims, three thousand Christians and one thousand Jews. A Jew who visited the city in about 1551 reported seeing a large school and two synagogues, the smaller for Ashkenazi Jews, and the larger for Sephardic Jews. The latter spoke Ladino, a Judeo-Spanish dialect which soon became the common language of North African and Mideast Jews, later known as "Oriental" Jews. Safed, by then the main center of Jewish life in Palestine, had perhaps another one thousand or more Jewish families.
About 1560 Don Joseph Nasi (1524-1579), a Jew who was an Ottoman tax official and adviser to the Ottoman government, persuaded Suleiman and his son and successor, Prince Selim II, to deed over to him Tiberias and the surrounding region, including seven villages. This area was to be used as a homeland for Jewish refugees, especially from Spain, as well as from Portugal, which in 1498, under Spanish pressure, had also banished all Jews. The Tiberias project was under way by 1564, but increasing opposition from neighboring Arabs worked against building the new settlement. Moreover, Nasi was either too busy with his government duties in Istanbul (the renamed Constantinople) or had lost interest in the project, and it failed. However, Solomon Aben-Jaish (1520-1603) revived the plan and received government approval. His family moved to the Tiberias area and restarted the project, but this too failed.
Meanwhile, the Jews in Safed prospered and increased. It again was a thriving center for Jewish studies as well as for trade in grain and cloth, especially silk and wool fabrics. However, many Palestinian Jews, especially scholars, students, elderly and indigent, still depended primarily on donations from Diaspora Jews for their livelihood.
Toward the end of the sixteenth century Ottoman sultans began to lose their powerful hold on the empire. As generals vied for power, the role of Jews in the government in Istanbul shrank. Local officials squeezed high taxes out of their subjects, including Jews. Palestine became an increasingly neglected backwater of the hard-pressed empire - a condition that continued throughout the next three hundred years until the British and Arabs took it in 1917-18. By the late seventeenth century many Jews had abandoned smaller villages because of marauding nomads in search of grazing lands and plunder. Jews continued to live in Hebron and Gaza city; about twelve hundred lived in Jerusalem.
During the eighteenth century Palestine's Jews increased, especially through immigration of Hasidim from Poland and Russia. The Ottoman governor of northern Palestine rebuilt Tiberias; at his invitation Jews moved there in 1738. It soon became a center second only to Safed, which had a plague in 1742 and an earthquake in 1769. By 1776 a number of Russian Hasidim had moved to Safed. Meanwhile, by the mid-1700s, the world-wide Jewish population stood at perhaps slightly more than two million. About half lived in Poland, their ancestors having been welcomed there during persecutions in England, France, Germany and Spain.
Palestinian Jews, Samaritans and Christians alike had chafed under the harsh Roman-Byzantine imperial rule between A.D. 136 and 638. Yet this period often gave Palestine a peace and prosperity which supported a much larger population, including a much larger Jewish population, than did most or perhaps all of the Arab and Crusader periods between 638 and 1800. Not only was the Jewish population of Palestine very low between about A.D. 1000 and 1800; Palestine's total population was also very low. After the Black Death in the fourteenth century its total population dipped to perhaps 150,000. Israeli demographer Bachi sums up much of the 800-year period:
As shown by an impressive quantity of historical records, throughout the late Middle Ages and up to the 19th century, Jews came to the land of their fathers, as individuals or in groups, prompted by the desire to pray, to study, and finally to be buried there. Sometimes they were inspired by Messianic hopes, and sometimes they sought asylum in the Holy Land during times of distress in the Diaspora.
However, statistically speaking, these movements were limited in size. It is also likely that poor economic conditions, lack of personal security and low health standards prevailing in the country were causes of substantial re-emigration and high mortality, which therefore greatly reduced the demographic influence of immigration.
By 1800 perhaps 5,000-6,500 Jews and some 265,000-325,000 Arabs lived in Palestine.
|
Crawfish and Water Birds
Jay Huner
Just imagine the thrill of seeing several thousand white egrets, ibises and blue-hued herons, along with a hundred or more scarlet roseate spoonbills, exploding from the shallows of a southern Louisiana wetland. One does not have to be a birder to be amazed by the color and magnificence of these stately wading birds. Fortunately, such sights are now commonplace from mid-autumn into early summer in the Bayou State, a consequence of the expansion of rice farming and crawfish aquaculture there.
Figure 1. White-faced ibis fly over snowy egrets.
Brought nearly to extinction by hunters roughly a century ago, egret, heron, ibis and spoonbill populations have rebounded dramatically in southern Louisiana in the past 50 years. In other areas, the status of wading birds is not so rosy, as coastal wetlands succumb to the tide of development sweeping the American Sunbelt. Florida, for example, has long been noted for its many wading birds, but the loss of appropriate habitat in that state has forced some populations of these birds into decline. A million acres of coastal wetland in Louisiana have also disappeared, but the half million inland acres that are now flooded regularly to raise rice or crawfish have helped to compensate for that damage to the environment. So Louisiana's success merits attention—and nurturing.
Although most people in my state admire such birds for their beauty and applaud their resurgence, crawfish "farmers" have become increasingly concerned about the damage that these animals do to their "crop" of small crustaceans. I began work in crawfish aquaculture when I was a graduate student at Louisiana State University in 1972, and even at that time owners were concerned about wading birds raiding the 40,000 or so acres of ponds they had by then built.
Today, Louisiana crawfish farmers have nearly three times that area in production. For the most part, they use these shallow ponds to raise red swamp crawfish (Procambarus clarkii), which look like tiny lobsters and are similarly tasty. Farm-raised and wild-caught animals now contribute equally to the 50,000 or so tons of live Louisiana crawfish sold each year, which accounts for nearly half of the global trade in this delicacy.
Despite the healthy growth of their industry, crawfish farmers continue to complain loudly about losses from wading birds. They also worry about the large flocks of crawfish-eating cormorants, gulls, terns and, in some cases, pelicans, which have become common visitors to crawfish ponds in the winter and spring. Even coots, normally herbivorous, have become abundant and are feeding on crawfish to some degree.
Crawfish ponds are clearly water-bird magnets. Carnivorous birds have learned to take advantage of the concentration of nutrient-rich prey—crawfish, insects, worms, small fishes and tadpoles—that these artificial wetlands harbor. And herbivorous birds feast on the abundance of seeds and aquatic plants available in the ponds, which typically range from 10 to 20 acres in size and are normally a foot or so deep.
Actually, the ponds are kept that full of water for only part of the year, typically from mid-fall through mid-spring, which simulates the natural hydrological cycle that local wetlands experience. Crawfish farmers often use the summer months to cultivate rice in their ponds by putting just a few inches of water in them. Raising this second crop adds to their profits and does not interfere with the production of crawfish, which normally begins again in October. (Incidentally, the combination of crawfish and rice makes an excellent gumbo dinner.)
Raising crawfish in this way is relatively inexpensive because, unlike farmed shrimp, these tiny crustaceans do not need to be fed fish or vegetable meal. Crawfish are omnivorous and can devour the sundry small animals that proliferate once the plants in the ponds begin to deteriorate. They also eat the decomposing vegetation itself, along with various seeds and stray rice grains, where rice was grown during the previous summer. So even the herbivorous birds that do not feed on crawfish directly compete with them for food. But gauging the damage that birds do to aquaculture operations has proved difficult, in part because crawfish management is an inexact science.
|
Chegg Guided Solutions for Anatomy & Physiology: An Integrative Approach, 1st Edition: Chapter 28
To explain a young couple's difficulty conceiving, a fertility clinic offers both of them some insight into how hot baths may negatively affect the man's reproductive fertility.
Hot water does not directly affect the erectile tissue of the penis, so it would not interfere with a normal erection. An erection results from blood flowing into the three erectile bodies of the penis; therefore answer (a) is incorrect.
The autonomic neurons that control erection and ejaculation are not in direct contact with hot bath water during immersion, and thus cannot be damaged by it. Answer (b) is incorrect.
The hypothalamus secretes GnRH to begin the hormonal cycle for sperm production. This cycle is not affected by temperature but rather by hormone levels in the blood. Therefore, answer (d) is incorrect.
The correct explanation is answer (c): the temperature of the testes rises in hot water. The testes must stay within a narrow temperature range to produce viable sperm. Immersing them in hot water raises them to abnormally high temperatures, resulting in fewer viable sperm being produced.
|
Creationists and Carbon 14
The quotation comes from a comment on Facebook, and is used with permission.
The comment was on this post from “Questioning Answers in Genesis” which I shared, which addresses a specific claim that Ken Ham made in his debate with Bill Nye, as well as the broader issue of what young-earth creationists say about radiometric dating.
• Sven2547
Thanks for this. I was raging at the screen when Nye failed to call out Ham for this at one point.
• Will
This only scratches the surface of what Young Earthers say about radiometric dating that just isn’t true.
• John Wilkins
I now know a meme originator!
This isn’t my first!
• http://youtube.com/user/BowmanFarm Brian Bowman
melodysheep has a new video with the following description:
“A marvelous excerpt from Bill Nye’s recent debate set to original music.”
• stuart32
The main creationist argument against radiometric dating seems to be that the rate of decay might have been different in the past. There are a number of objections to this, including the fact that if the rate had been speeded up as much as it would have needed to be to make a 6000 year old earth look 4.5 billion years old, the earth would have fried.
Another objection is that the radiometric dates fit with other observations. We know that the continents are drifting across the earth at a slow but steady rate. It’s an interesting fact that the theory of continental drift (plate tectonics) was established before anyone could actually observe the continents moving. Only later was it possible to observe the movement directly. Ken Ham might have said,”Has anyone seen them move? Then how do you know?”
Anyway, we know that new seafloor is being created at the mid-Atlantic Ridge, which pushes Africa and South America apart, and we know the rate at which it is happening. We can use this to check the reliability of radiometric dating by obtaining samples of the oceanic crust in the Atlantic and dating them. Samples from the Ridge should be young and samples further away should be progressively older. The two methods are in agreement.
If the rate of decay had been different in the past then the rate of seafloor spreading would also have had to be different by the same amount in order to give the present agreement. Since the two processes are completely different this would be an odd coincidence.
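This consistency check can be sketched as a back-of-the-envelope calculation. The numbers here are not from the comment: the 2.5 cm/yr half-spreading rate is an assumed, typical figure for the mid-Atlantic Ridge, used only for illustration.

```python
def crust_age_years(distance_km, half_rate_cm_per_yr=2.5):
    """Rough age of oceanic crust a given distance from the ridge axis.

    Assumes a constant half-spreading rate (how far one plate moves
    away from the ridge per year); 2.5 cm/yr is an illustrative value.
    """
    distance_cm = distance_km * 1e5  # 1 km = 100,000 cm
    return distance_cm / half_rate_cm_per_yr

# Crust sampled 100 km from the ridge axis should date radiometrically
# to roughly 4 million years -- if decay rates have been constant.
print(crust_age_years(100))
```

If radioactive decay had run faster in the past, the radiometric ages would disagree with this independent, purely geometric clock; the fact that the two agree is the point of the argument above.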
• David_Evans
“the earth would have fried.”
I once emailed Answers in Genesis pointing out this problem, after they referred to the allegedly scientific RATE program:
They replied that there were 3 possible solutions:
1 The accelerated decay happened during the Flood
2 The accelerated decay happened during one of the 6 days of creation
3 the rocks were created with the appearance of age
Of course options 1 and 2 would greatly increase the problem, and option 3 is just the Omphalos argument and totally unscientific. At that point I gave up on them.
• stuart32
Yes, I think this objection to their argument is absolutely decisive. There are other objections, but this one is more than enough on its own to settle the debate. Accelerating the rate of decay to the extent that it melts the rocks would have the effect of resetting the clock. So if it really happened it would give the earth a misleading impression of youth, not of age.
If they have option 3 up their sleeves, it makes you wonder why they bother with options 1 and 2.
|
Michael Tomasky Blog
What if it can't be stopped?
David Roberts of the enviro site Grist asks a disturbing question and one that hangs in the balance today as we all watch and see whether BP can perform this top-kill operation: what if the leak simply can't be stopped?
If today's operation (which has succeeded on land but never been tried under 5,000 feet of salt water) fails, it will likely be another few weeks before a new attempt can be made. At 10,000 barrels (or whatever) a day...then what? Mother Jones reports in all seriousness that a "groundswell" is building for dropping a nuclear bomb on the spill. This has actually been done in Russia, but for underground leaks, not seaborne ones.
The possibility exists that humankind simply does not have the capacity to fix this problem. Roberts:
Once we know that accidents can be catastrophic and irreversible, it becomes clear that there is no margin of error. We're operating a brittle system, unable to contain failure and unable to recover from it. Consider how deepwater drilling will look in that new light.
I agree - that would be a staggering shock to Americans. When problems have arisen requiring innovation and know-how, there's never been anything we couldn't do eventually. There have been plenty of things we didn't and don't do: we didn't build the right kind of levees around New Orleans because the price was "too high" and we don't require enough safety trips in coal mines because we as a society have decided it isn't "worth" it.
Those things are shameful, as far as I'm concerned, but they're quite different psychologically from simply not having a solution at all. And remember: if that is the case, this leak could go on for years. Not an exaggeration. There is lots of oil down there. Imagine this going on for five years.
Would people be up in arms demanding the government find a solution at any price? Would a majority of Americans grasp the connection between the need for government and regulation (in this case, the acoustic switch and other redundancies that other governments require in offshore operations but the US does not) and the possible prevention of something like this?
Or would Americans just say, well, this is tragic, but it's one of those things that happens and it's not an excuse for more government? And we need oil so let's keep at it. Something like this is unlikely to happen twice.
I fear the response will be the latter. I suppose the only reason to think otherwise is that this is happening down south, and southern political and corporate interests that would normally be free-market all the way might be thinking twice since it's their own back yard.
But in general, we've reached a point in the US at which the predictable agitprop machinery will start humming if the leak proves unstoppable, saying it isn't really all that terrible, and brace yourselves as Obama and Pelosi et al. use this as one more reason to swoop in and snatch away more of your liberty. And then the debate won't be about the facts of drilling operations and safeguards at all, but about freedom versus statism. And you know which side wins that argument in America.
And the impotence of not being able to do anything? It will be shocking for a while. And then, one day, it won't be. And eventually a solution will come along, and then we'll forget, in that manner that we increasingly do.
It's pretty depressing. Let's just hope to heaven this thing works today.
|
Bug love: The fascinating story of the fig wasp
About 10 years ago, I had quite a scare when my high school biology teacher warned us away from figs because their insides were crawling with wasps. Some internet research revealed that this claim was only partly true, so I continued along my fig-consuming way without thinking much more of it.
That is, until today, when we published a paper titled “Moving your sons to safety: galls containing male fig wasps expand into the centre of figs, away from enemies,” which made me look deeper into this symbiotic relationship – and now I’m eager to share all the creepy crawly details I found.
The short story is that fig wasps lay their eggs inside the fruit, where they hatch and mate. The female then crawls out of the fig, through a tunnel chewed by the male, and eats her way into a new fig to lay her eggs. In the process, she loses her wings and antennae and dies, trapped, inside the new fig, which she has also pollinated.
As for the caveats: there are also species of self-pollinating figs, which do not require the wasps, and species of parasitic fig wasps that game the system, taking advantage of the figs as incubators without doing their pollination duty. (I’m still not sure which ones make it to the supermarket though.)
Today’s paper explores some of the differences in egg-laying behavior between pollinating, symbiotic wasps and non-pollinating, parasitic wasps. Non-pollinating wasps not only take advantage of the fig, but sometimes also kill the larvae of pollinating wasps. In response to this threat, it appears that pollinator wasps have developed some defense mechanisms, including the location and sex ratio of eggs laid, the authors report.
Wasps aside, I also learned a surprising piece of information about figs themselves. They are not fruits, but are actually something called an “inflorescence,” or a cluster of flowers. It’s just that the flowers are hidden on the inside: each crunchy little seed in a fig represents one flower. To make it more complicated, there are three different types of flowers: male, short female, and long female. Female fig wasps can only reach and lay their eggs in the short female flowers, so the long female flowers are left to develop fig seeds, allowing both the fig and the wasp to prosper.
Image source: Mundoo via Flickr
|
From Wikipedia, the free encyclopedia
"Murderer" redirects here. For other uses, see Murderer (disambiguation).
For other uses, see Murder (disambiguation).
The elements of common law murder are:
1. Unlawful
2. killing
3. of a human
4. by another human
5. with malice aforethought.[3]
Killing – At common law life ended with cardiopulmonary arrest[3] – the total and permanent cessation of blood circulation and respiration.[3] With advances in medical technology courts have adopted irreversible cessation of all brain function as marking the end of life.[3]
of a human – This element presents the issue of when life begins. At common law, a fetus was not a human being.[5] Life began when the fetus passed through the vagina and took its first breath.[3]
by another human – At early common law, suicide was considered murder.[3] The requirement that the person killed be someone other than the perpetrator excluded suicide from the definition of murder.
1. Intent to kill,
2. Intent to inflict grievous bodily harm short of death,
Common and sharia law
According to Blackstone, English common law identified murder as a public wrong.[7] At common law, murder is considered to be malum in se, that is, an act which is evil in itself. An act such as murder is wrong or evil by its very nature, and it is this nature that makes any specific detailing or definition in the law unnecessary for murder to be considered a crime.[8]
Depending on the jurisdiction, sharia law does permit capital punishment for murder. However, if a relative of the victim forgives the murderer, then he or she may go unpunished.[9]
• Unlawful killings without malice or intent are considered manslaughter.
Specific to certain countries
• A killing simply to prevent the theft of one's property may or may not be legal, depending on the jurisdiction. In the US, such a killing is legal in Texas.[12] In recent years, Texas has seen some very controversial incidents involving killing to protect property, which have prompted discussion of the state's laws and social norms (see Joe Horn shooting controversy). In one highly controversial case in 2013, a jury in south Texas acquitted a man who killed a prostitute who, after receiving $150 from him in exchange for sex, refused to have sex with him and attempted to run away with his money. The man's lawyer argued that he was trying to retrieve property stolen during nighttime, an action for which Texas law allows the use of deadly force. The jury accepted this defense. The case was controversial in part because it was questionable whether the money was in fact stolen, since the man had given it voluntarily, and the "contract" of prostitution is itself illegal in Texas, where both buying and selling sex are criminal offenses.[13][14]
• Killing to prevent specific forms of aggravated rape or sexual assault - killing of attacker by the potential victim or by witnesses to the scene; this is especially the case in regard to child rape- legal in parts of the US and in various countries[15]
• In some parts of the world, especially in jurisdictions which apply Sharia law, the killing of a woman or girl by her husband or other family members in specific circumstances (e.g., when she commits adultery), known as honor killing, is not considered murder.
Murder in the House, Jakub Schikaneder.
California's murder statute, Penal Code Section 187, was interpreted by the Supreme Court of California in 1994 as not requiring any proof of the viability of the fetus as a prerequisite to a murder conviction.[16] This holding has two implications. The first is a defendant in California can be convicted of murder for killing a fetus which the mother herself could have terminated without committing a crime.[16] The second, as stated by Justice Stanley Mosk in his dissent, because women carrying nonviable fetuses may not be visibly pregnant, it may be possible for a defendant to be convicted of intentionally murdering a person he did not know existed.[16]
Mitigating circumstances
Main article: M'Naghten rules
Aaron Alexis holding shotgun during his rampage.
Under New York law, for example:
—N.Y. Penal Law, § 40.15[18]
Under the French Penal Code:
Article 122-1
Post-partum depression
For a killing to be considered murder, there normally needs to be an element of intent. A defendant may argue that he or she took precautions not to kill, that the death could not have been anticipated, or was unavoidable. As a general rule, manslaughter[20] constitutes reckless killing, but manslaughter also includes criminally negligent (i.e. grossly negligent) homicide.[21]
Diminished capacity
In those jurisdictions using the Uniform Penal Code, such as California, diminished capacity may be a defense. For example, Dan White used this defense[22] to obtain a manslaughter conviction, instead of murder, in the assassination of Mayor George Moscone and Supervisor Harvey Milk.
Aggravating circumstances
• Premeditation
• Poisoning
• Murder of a police officer,[23] judge, fireman or witness to a crime[24]
• Murder of a pregnant woman[25]
• Crime committed for pay or other reward[26]
• Exceptional brutality or cruelty
• Murder for a political cause,[23][27][28]
Year-and-a-day rule
Main article: Year and a day rule
In the United States, many jurisdictions have abolished the rule as well.[30][31] Abolition of the rule has been accomplished by enactment of statutory criminal codes, which had the effect of displacing the common-law definitions of crimes and corresponding defenses. In 2001, the Supreme Court of the United States held that retroactive application of a state supreme court decision abolishing the year-and-a-day rule did not violate the Ex Post Facto Clause of Article I of the United States Constitution.[32]
Origins and history
In Judeo-Christian traditions, the prohibition against murder is one of the Ten Commandments given by God to Moses in (Exodus: 20v13) and (Deuteronomy 5v17). The Vulgate and subsequent early English translations of the Bible used the term secretly killeth his neighbour or smiteth his neighbour secretly rather than murder for the Latin clam percusserit proximum.[35][36] Later editions such as Young's Literal Translation and the World English Bible have translated the Latin occides simply as murder[37][38] rather than the alternatives of kill, assassinate, fall upon, or slay.
The term 'Assassin' derives from Hashshashin,[39] a militant Ismaili Shi`ite sect, active from the 8th to 14th centuries. This mystic secret society killed members of the Abbasid, Fatimid, Seljuq and Crusader elite for political and religious reasons.[40] The Thuggee cult that plagued India was devoted to Kali, the goddess of death and destruction.[41][42] According to some estimates the Thuggees murdered 1 million people between 1740 and 1840.[43] The Aztecs believed that without regular offerings of blood the sun god Huitzilopochtli would withdraw his support for them and destroy the world as they knew it.[44] According to Ross Hassig, author of Aztec Warfare, "between 10,000 and 80,400 persons" were sacrificed in the 1487 re-consecration of the Great Pyramid of Tenochtitlan.[45][46]
Middle English mordre is a noun from Anglo-Saxon mordor and Old French murdre. Middle English mordre is a verb from Anglo-Saxon myrdrian and the Middle English noun.[47]
International murder rate per 100,000 inhabitants, 2011
An estimated 520,000 people were murdered in 2000 around the globe. Another study put the worldwide number of murders at 456,300 in 2010, a 35% increase since 1990.[48] Two-fifths of the victims were young people between the ages of 10 and 29 who were killed by other young people.[49] Because murder is the crime least likely to go unreported, murder statistics are seen as a bellwether of overall crime rates.[50]
UNODC : Per 100,000 population (2011)
Murder rates by country. Murder rates in jurisdictions such as Japan, Singapore, Hong Kong, Iceland, Norway, Switzerland and Austria are among the lowest in the world, around 0.5 cases per 100,000 people per year; the rate of the United States is among the highest of developed countries, around 5.5 in 2004,[51] with rates in larger cities sometimes over 40 per 100,000.[52] The top ten highest murder rates are in Honduras (91.6 per 100,000), El Salvador, Ivory Coast, Venezuela, Belize, Jamaica, U.S. Virgin Islands, Guatemala, Saint Kitts and Nevis and Zambia. (UNODC, 2011 - full table here).
The following absolute murder counts per country are not comparable because they are not adjusted for each country's total population. Nonetheless, they are included here for reference, with 2010 used as the base year (they may or may not include justifiable homicide, depending on the jurisdiction). There were 52,260 murders in Brazil, surpassing the record set in 2009.[53] Over half a million people were shot to death in Brazil between 1979 and 2003.[54] There were 33,335 murder cases registered across India,[55] about 19,000 murders committed in Russia,[56] approximately 17,000 murders in Colombia (the murder rate was 38 per 100,000 people; in 2008 murders went down to 15,000),[57] approximately 16,000 murders in South Africa,[58] approximately 15,000 murders in the United States,[59] approximately 26,000 murders in Mexico,[60] approximately 13,000 murders in Venezuela,[61] approximately 4,000 murders in El Salvador,[62] approximately 1,400 murders in Jamaica,[63] approximately 550 murders in Canada[64] and approximately 470 murders in Trinidad and Tobago.[63] Pakistan reported 12,580 murders.[65]
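To make the non-comparability concrete, the sketch below converts a few of the counts above into rates per 100,000 inhabitants. The counts come from the text; the 2010 population figures are rough, assumed round numbers used only for illustration.

```python
# Convert absolute murder counts into rates per 100,000 inhabitants.
# Counts are from the text above; the populations are approximate
# 2010 figures, assumed here for illustration only.
murders = {"Brazil": 52_260, "India": 33_335, "United States": 15_000}
population = {
    "Brazil": 195_000_000,
    "India": 1_230_000_000,
    "United States": 309_000_000,
}

def rate_per_100k(country):
    return murders[country] / population[country] * 100_000

for c in murders:
    print(f"{c}: {rate_per_100k(c):.1f} per 100,000")
```

India's count is second highest of the three in absolute terms, yet its per-capita rate is by far the lowest, which is exactly why unadjusted counts mislead.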
Murder in Rio de Janeiro. More than 800,000 people were murdered in Brazil between 1980 and 2004.[66]
In the United States, 666,160 people were killed between 1960 and 1996.[67] Approximately 90% of murders in the US are committed by males.[68] Between 1976 and 2005, 23.5% of all murder victims and 64.8% of victims murdered by intimate partners were female.[69] For women in the US, homicide is the leading cause of death in the workplace.[70]
In the US, murder is the leading cause of death for African American males aged 15 to 34. Between 1976 and 2008, African Americans were victims of 329,825 homicides.[71][72] In 2006, Federal Bureau of Investigation's Supplementary Homicide Report indicated that nearly half of the 14,990 murder victims were Black (7421).[73] In the year 2007 non-negligent homicides, there were 3,221 black victims and 3,587 white victims. While 2,905 of the black victims were killed by a black offender, 2,918 of the white victims were killed by white offenders. There were 566 white victims of black offenders and 245 black victims of white offenders.[74] The "white" category in the Uniform Crime Reports (UCR) includes non-black Hispanics.[75] In London in 2006, 75% of the victims of gun crime and 79% of the suspects were "from the African/Caribbean community."[76] Murder demographics are affected by the improvement of trauma care, which has resulted in reduced lethality of violent assaults – thus the murder rate may not necessarily indicate the overall level of social violence.[77]
Workplace homicide is the fastest growing category of murder in America.[70]
Despite the immense improvements in forensics in the past few decades, the fraction of murders solved has decreased in the United States, from 90% in 1960 to 61% in 2007.[79] Solved murder rates in major U.S. cities varied in 2007 from 36% in Boston, Massachusetts to 76% in San Jose, California.[80] Major factors affecting the arrest rate include witness cooperation[79] and the number of people assigned to investigate the case.[80]
Intentional homicide rate per 100,000 inhabitants, 2009
Southern slave codes did make willful killing of a slave illegal in most cases.[86] For example, the 1860 Mississippi case of Oliver v. State charged the defendant with murdering his own slave.[87] In 1811, the wealthy white planter, Arthur Hodge, was executed for murdering several of his slaves on his plantation in the British West Indies.[88]
In Corsica, vendetta was a social code that required Corsicans to kill anyone who wronged their family honor. Between 1821 and 1852, no less than 4,300 murders were perpetrated in Corsica.[89]
Specific murder law
Degrees of murder by country
Certain countries employ the concept of first-, second-, and third-degree murder. Canadian law distinguishes first- and second-degree murder. Both the United States and Peru have respective degrees of first-, second-, and third-degree murder. See Degrees of murder in the United States and Murder (Peruvian law).
See also
Topics related to murder
|
Password Protecting Pages
Password File
You can password protect a portion or all of your website. Two files are required for this, a password file containing usernames and encrypted passwords, and an ".htaccess" file.
To create this file, use htpasswd. If the password file doesn't yet exist, type:
htpasswd -c filename user
To add additional users to an existing password file type:
htpasswd filename user
To delete a user from an existing password file, type:
htpasswd -D filename user
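For illustration, a password-file entry simply pairs a username with a hashed password. The following Python sketch reproduces the `{SHA}` entry format that `htpasswd -s` emits (the default MD5 and the bcrypt `-B` formats require third-party libraries); the username and password here are made up:

```python
import base64
import hashlib

def htpasswd_sha_entry(user, password):
    """Build a password-file line in the {SHA} format produced by
    `htpasswd -s`: the base64 encoding of the raw SHA-1 digest.
    Illustration only; prefer htpasswd's bcrypt (-B) hashes in practice."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode("ascii"))

# Hypothetical user, for demonstration only.
entry = htpasswd_sha_entry("alice", "password")
```

The resulting line can be appended to the password file by hand, which is all `htpasswd` itself does (aside from locking and option handling).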
The ".htaccess" File
The second file required to password protect a portion of your web site is a file called ".htaccess". This is placed in the directory to be protected. This file tells the web server where to find the ".htpasswd" file and what form of authentication to apply. This example shows how to use password authentication to protect a portion of your web space. It is also possible to limit access using groups, by domain name, or by address space. The format of the ".htaccess" file is:
AuthUserFile /home/login/.htpasswd
AuthGroupFile /dev/null
AuthName ByPassword
AuthType Basic
<Limit GET PUT POST>
require valid-user
</Limit>
|
Indicator Report - Birth Defects: Overall
Why Is This Important?
Major birth defects are associated with many adverse outcomes, from pregnancy through adult life. Pregnancies affected by birth defects are more likely to end as a stillbirth. Affected newborns and children are at an increased risk of premature death, chronic illness, or long term disability. In the United States and other developed countries, birth defects are the leading cause of infant mortality, and are a major contributor to pediatric hospitalizations, chronic childhood illness, and developmental disabilities. Because Utah has the highest birth rate in the nation, birth defects are a crucial public health issue in the state.
Tracking and studying birth defects provides the information needed to monitor the burden of disease locally and statewide, to assess services, to allocate resources for optimal care, and to evaluate prevention efforts.
All Birth Defects Prevalence, Overall and by Race/Ethnicity, Utah, 1999-2011
::chart - missing::
Data Notes
Hispanic persons may be of any race.
Data Sources
Utah Birth Defect Network.
Other Views
Number of cases of major birth defects per 1,000 live births and stillbirths. Major birth defects are those that require medical, surgical, or rehabilitative services, and have an impact on the person's health and development.
Common major birth defects include anomalies of the heart (e.g., septal defects, conotruncal defects), face (e.g., cleft lip and palate), skull (e.g., craniosynostosis), limbs (e.g., missing digits), brain or spine (e.g., anencephaly and spina bifida), kidneys and genitourinary system (e.g., absent kidney, hydronephrosis, hypospadias), and liver and gastrointestinal system (e.g., biliary atresia, esophageal atresia). Chromosomal anomalies such as Down syndrome are also included among major birth defects.
In this report we do not include certain mild conditions, such as heart findings detected in preterm babies that often resolve over time (e.g., patent ductus arteriosus); mild conditions not leading to treatment (e.g., coronal hypospadias not needing surgery); or conditions that usually do not lead to major medical concerns except perhaps in later stages of life (e.g., mitral valve prolapse).
How We Calculated the Rates
Numerator: Number of cases of major birth defects among live births and fetal deaths in women residing in Utah.
Denominator: Number of live births and stillbirths among women residing in Utah.
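As a simple illustration of the calculation (with made-up counts, not actual Utah figures):

```python
# Hypothetical counts for illustration only -- not actual Utah data.
cases = 1900    # major birth defects among live births and fetal deaths
births = 50000  # live births plus stillbirths, same population and period

# Prevalence expressed per 1,000 live births and stillbirths.
rate_per_1000 = cases / births * 1000
```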
Page Content Updated On 10/23/2013, Published on 11/10/2013
The information provided above is from the Utah Department of Health's Center for Health Data IBIS-PH web site. The information published on this website may be reproduced without permission. Please use the following citation: "Retrieved Thu, 18 September 2014 1:38:29 from Utah Department of Health, Center for Health Data, Indicator-Based Information System for Public Health Web site."
|
Space Science
Milky Way Is Twice the Size We Thought
Posted by kdawson
from the everything-you-know-is-wrong dept.
Peter writes to tell us about a research group at the University of Sydney in Australia, who in the middle of some calculation wanted to check the numbers everybody uses for the thickness of our galaxy at the core. Using data available freely on the Internet and analyzing it in a spreadsheet, they discovered in a matter of hours that the Milky Way is 12,000 light years thick, vs. the 6,000 that had been the consensus number for some time.
Comments Filter:
• 2x bigger (Score:3, Insightful)
by Feef Lovecraft (1231264) <feeferscat AT gmail DOT com> on Wednesday February 20, 2008 @04:06AM (#22485308) Homepage
So until now everyone was just measuring the radius of the Milky Way?
• by timmarhy (659436) on Wednesday February 20, 2008 @04:11AM (#22485344)
that only confirms that wikipedia is not a reliable source.
• by Thanshin (1188877) on Wednesday February 20, 2008 @04:13AM (#22485360)
Is there any physical effect where a galaxy ends? Or are we just talking about an imaginary limit.
How hard is it to map the galaxy? If we don't know where the stars are, we can't know the size. If we know, we don't need it; we can describe the actual, real, shape.
Where's the flaw in my logic? (I hope it's in the part about the limit being imaginary, I like limits in Space like the heliosphere)
• Re:A good reminder (Score:3, Insightful)
by bandersnatch (176074) on Wednesday February 20, 2008 @04:16AM (#22485374) Homepage
because like the internet is like TOTALLY a definitave source mkay?
• Dark Matter (Score:1, Insightful)
by Anonymous Coward on Wednesday February 20, 2008 @04:17AM (#22485390)
Does this ruin dark matter? Perhaps our mass estimates for our own galaxy were off by a factor of 2.
• by TapeCutter (624760) on Wednesday February 20, 2008 @04:37AM (#22485454) Journal
People who depend on a single source are unreliable.
• by Atario (673917) on Wednesday February 20, 2008 @04:58AM (#22485538) Homepage
Not anymore! Hee hee!
• by Jugalator (259273) on Wednesday February 20, 2008 @05:09AM (#22485582) Journal
that only confirms that wikipedia is not a reliable source.
This argument is getting sort of tiresome to me. In well written Wikipedia articles, key facts are often referenced today. This then becomes a blanket argument against Wikipedia as a whole, without caring for whether the information was well referenced or not. Often, it is. Sure, often it's not too, but IMHO, one needs to check that out first.
This time, you've already received your answer to why Wikipedia had this information, and it's in fact not a long time ago I've had to do the same.
So, please guys, before you bash Wikipedia, check if there's a good reason to the discrepancy of the information. Surprisingly often, especially in articles receiving good attention like the one for our galaxy, there is.
• by Jugalator (259273) on Wednesday February 20, 2008 @05:12AM (#22485598) Journal
Ironically, Wikipedia is one among few encyclopedias that do this. Not for all facts, far from it, but for a fair number of facts. For example, Wikipedia has three references for the mass of the Milky Way, and you can also see which referenced were used for that sole claim. You won't be able to see that by using Britannica.
• by dltaylor (7510) on Wednesday February 20, 2008 @05:17AM (#22485628)
The spiral arms are thicker than we've been assuming. Does that mean that there are more stars and gas/dust clouds in the greater volume? If there are more, then the mass of the galaxy is higher, and with the relativistic adjustment recently adopted, there's less need for a "dark halo", or, at least, less of one required to balance the velocity of the outer stars. OTOH, if there's the same amount, then the density is less, which throws off the very measurement technique that they're using to derive the new thickness, since the less-dense interstellar medium will have less effect on the two wavelengths (yeah, I read the article).
Anyone know of an online resource for the American Astronomical Society papers? I'd like to see what, if anything, they say about the density values for the WIM.
• Re:A good reminder (Score:4, Insightful)
by TapeCutter (624760) on Wednesday February 20, 2008 @05:24AM (#22485662) Journal
The other reply is correct. It's not that everyone just assumed its origin; it's that everyone was uncertain about the origin. There was a hell of a lot of evidence collected by the CDC, WHO and others. Science is designed like that, nobody is ever 100% certain about anything.
Some religious and political groups (where many claim/demand proof) use this systematic uncertainty to justify their particular perversions of common decency when science presents them with inconvenient evidence. The search for the origin of AIDS was a good example.
Nobody is immune because nobody can keep up with everything. The comments on slashdot demonstrate that every day. Over the last 7-8yrs there has been a magnificent debate on slashdot over global warming. What once was marked troll is now insightful, if nothing else I think most of the regulars (including me) know more about the science behind it than they did a few years ago.
• by piquadratCH (749309) on Wednesday February 20, 2008 @05:35AM (#22485710)
As a public service to the Slashdot community I'm going to blatantly violate copywrite and post the lyrics here so we can all see them after geocities melts down
If you violate copyright, do it right [youtube.com].
• by uhlume (597871) on Wednesday February 20, 2008 @05:41AM (#22485740) Homepage
The NASA source doesn't specify at what radius the thickness is measured, leading me to believe that the "1000 light years" figure references an average, or representative, thickness. According to the summary (although curiously unmentioned in TFA) this new discovery seems to pertain specifically to the Milky Way's thickness at the Galactic core, where it is substantially thicker than at points located further down the arms (as illustrated in this side view [usra.edu]).
• by Eivind (15695) <[email protected]> on Wednesday February 20, 2008 @05:43AM (#22485746) Homepage
Technically, Wikipedia should never claim any specific thing. They don't really have an opinion as such on the size of the MW or anything else. Yeah, I know, the article says "The Milky Way is so-and-so big". But that should really be read as:
"Our sources, given under this article, claim that the Milky Way is so-and-so big." One could write it like that, but it'd become tiresome real quick.
That information is by necessity only at best as good as the sources.
Besides; that's the way reality works in general. When somebody claims some fact it ALWAYS means that the sources that person chooses to believe (be it his own eyes, a scientific paper, or Fox News) say so.
• by syousef (465911) on Wednesday February 20, 2008 @05:43AM (#22485752) Journal
From TFA with commentary:
Proving not all science requires big, expensive apparatus, Professor Gaensler and colleagues...downloaded data from the internet
No, this actually proves that you can reuse data gathered with large expensive apparatus. There's a difference. They couldn't have done this without expensive infrastructure that just happened to cost them nothing (or close to nothing) - ie. The original instruments and the Internet.
Well now wouldn't you want to explore why the data differs so much, before declaring your answer to be the correct one just because you verified your calculations are correct?
My first thought is: Did they use some standard or average value for the density of the WIM? Could the discrepancy be because the WIM itself is not uniform through the thickness of the galaxy?
This is definitely an interesting result and worth following up but rather than declare victory the real question is why is there such a large discrepancy with other data?
• Define "edge" (Score:3, Insightful)
by Dan100 (1003855) on Wednesday February 20, 2008 @06:03AM (#22485826) Homepage
To measure the thickness of something, you need to know where it ends. The Milky Way isn't a solid object, so there must be some arbitrary definition of the "edge" where the average density drops below a certain value.
Perhaps the differences in quoted thicknesses are the result of different definitions of the edge?
• by uhlume (597871) on Wednesday February 20, 2008 @06:44AM (#22486004) Homepage
How is this modded "insightful"? Scientific models and methods improve, often building upon earlier models and methods. This isn't an indication of incompetence or malfeasance in the earlier science; it just means that we're getting better at it.
Additionally, the revised estimate of the point of divergence of humans from primates as a result of newly-discovered fossil evidence isn't even remotely relevant to a case in which existing data has been re-interpreted to form a new conclusion.
• by rastan (43536) * on Wednesday February 20, 2008 @07:49AM (#22486414) Homepage
Wikipedia states the average thickness was 1000ly, not the maximum as discussed in the summary.
• That should say.. (Score:2, Insightful)
by mario_grgic (515333) on Wednesday February 20, 2008 @09:45AM (#22487140)
We now think the Milky Way is twice the size we previously thought. Using "is" makes it sound like they actually know how big it is this time around.
• by greginnj (891863) on Wednesday February 20, 2008 @09:56AM (#22487246) Homepage Journal
But the same people (presumably) have also rushed off to edit Wikipedia! (I see a half dozen edits this morning, to add in the "new" thickness.) That's the part that I find incredible. And people really take Wikipedia seriously?
You're right. God forbid some stupid fucking amateurs should be so passionately interested in your field that they would do something so counterproductive to your ivory-tower efforts as ... editing a Wikipedia article. It's not like they're part of the public that becomes more or less willing support funding for NSF or NASA grants, for instance. You should be able to get by on royal patronage just fine, without being troubled by the noise generated by hoi polloi.
• by gfxguy (98788) on Wednesday February 20, 2008 @10:14AM (#22487394)
It's what happens when one guy does a calculation and everybody else cites it... then it becomes "consensus."
• by boot_img (610085) on Wednesday February 20, 2008 @10:37AM (#22487676)
I guess I should clarify. I have no problem with amateurs editing Wikipedia. But I do have problems with, as you say, stupid, fucking amateurs editing Wikipedia.
For example, at the moment Wikipedia says:
The disk of the Milky Way galaxy is approximately 100,000 light years in diameter, and is believed to be about 1,000 light years thick (average thickness),[8] with the center bulge's thickness recently discovered by University of Sydney researchers to be about 12,000 light years, contrary to the previously thought 6,000.[9]
This is not correct. The Wikipedia editors have decided somehow that the 12,000 light year measurement refers to the center of the Milky Way (even though it does not state this anywhere in the U Sydney Press Release). As I said above, the 12,000 light year measurement refers not to a location but to a component, the Warm Ionised Medium or WIM.
My point is simply that the quality of Wikipedia is only as good as the effort that editors make to understand a subject and edit appropriately.
• by mysticgoat (582871) on Wednesday February 20, 2008 @10:58AM (#22487932) Homepage Journal
You bet I take Wikipedia seriously.
It is the largest and broadest source of information that has ever been available, any where, any time. It gives access to any of 2.25 million articles at incredible speed: it takes many times longer to phrase the Google query that identifies the relevant article than it does to fetch the text.
Are the contents accurate?
That's the wrong question.
Are the contents useful?
You bet they are, if you understand the context and know how to critically assess what you read. As with any encyclopedia, the most valuable parts of the articles are the references and citations to other works. Through those, a discerning reader can learn the major features of an unfamiliar field. Additionally, the Wikipedia article itself is a pretty good indicator of what the well informed non-expert believes he knows about any field. This is important: it wasn't so long ago that expensive surveys were the only tools for assessing lay knowledge about a field.
Wikipedia is not authoritative. That does not diminish its value. For various reasons no encyclopedic collection is an authority on any subject (other than itself, and even that is often time-limited).
• by greginnj (891863) on Wednesday February 20, 2008 @11:02AM (#22487992) Homepage Journal
I'm perfectly willing to concede that you have expertise on this subject. Since you complain that
why don't you become an editor and help it along? It's not hard at all. When talking about Wikipedia editors, there is no "them". Rather than telling Slashdot that Wikipedia could be better, you could be ... making Wikipedia better. If you put in appropriate footnotes and a clear explanation, especially once today's media frenzy dies down, you'll be lighting a candle rather than cursing the darkness. [Full disclosure, and odd coincidence: a while back, I made a minor edit for clarity to the article on "peculiar velocity" [wikipedia.org]. The article is still a stub -- feel free to check it out and improve it [wikipedia.org]. ]
I can easily understand that talking about 'how thick the galaxy is' is a lot like the 'is Pluto a planet' dispute -- it's just shorthand for more complex issues that you could elucidate. For example -- you could provide a brief paragraph describing the controversy, and how different elements lead to different measures of a galaxy's thickness, and give those measures. You'd be, you know, educating. If you both care enough and know enough about a subject to be bothered by the Wikipedia article, that's a sign you should be improving it.
• by Anonymous Coward on Wednesday February 20, 2008 @11:10AM (#22488094)
I guess you're one of these morons. All he's saying is do a little research and try to work out what is really going on before you go and write an encyclopedia. The usual wiki-wankery happens when people change things too fast without getting a clear understanding of what's going on before they make the changes, and that's why it can never be taken seriously.
• by zenyu (248067) on Wednesday February 20, 2008 @12:09PM (#22489014)
My example about the dating of primate and human evolution was to prove that these types of huge "corrections" have occurred in other scientific fields as well. So what we know to be absolutely true today, can be completely off tomorrow.
Scientists never know anything to be "absolutely true". Absolute truth is the domain of charlatans, liars and cheats.
When geology started scientists proved that certain rocks in England were "millions of years old!", and postulated based on that that the earth might be "hundreds of millions of years old!". But those numbers seem quaint and even silly today. As new rocks were discovered we soon learned that they were billions of years old, and when we learned about plate tectonics we realized the Earth could be older than the oldest rocks we could find. Our guesses as to what the Milky Way even looks like are based on looking at other galaxies and then seeing similar structures in our own local neighborhood. We can't actually look at it like we look at other galaxies. We are inside of it; close-by stars and dust obscure our view, and our vantage point is that of someone looking at a plane from the side.
What we can see are 'standard candles', that is stars emitting light within a certain range based on our knowledge of nuclear reactions and our ability to calculate apparent mass and composition. This rests on nuclear reaction theory for stars of large mass that we can not test as easily as we can test say simple nuclear decay, and it also rests on a number of approximations for the amount of dust vs "dark matter" in the intervening space (once you know how bright the star is at its surface, you then base its distance from you on how bright it appears to you on earth; the stuff in between matters). Terms like "dark matter" and "dark energy" should be hints that we can be off by several magnitudes. If one star is somewhere between 5,000 and 10,000 light years away, while it sounds like a huge difference, the same approximations can tell us that another star is between 5 and 10 light years away.
To put this in perspective, does it really matter if homo split off from ape 1 or 2 or 4 million years ago? Or, whether modern man is 50, 100, or 200 thousand years old? Even what happened in your day yesterday is not completely known to you. You have forgotten most of it, and what you do remember is colored by your dreams last night and your mind's ability to integrate it into what has happened before. But you'll make do with your imperfect knowledge of the day; this month you'll have an idea of how warm it was based on the weather this year plus the fact that you don't remember it being an unseasonable day, and ten years from now you'll have an idea based on the season, and ten thousand years from now, people reading your description of your day will have an idea of the weather based on the season and climate. All are less accurate than if I had asked you yesterday how warm it was, but so long as you understand the data and its approximate accuracy it is still useful. It's useful to have an idea of how long ago ape split off from man vs when modern man split off from other human species, but the day, the month, and the year aren't important when you're dealing with large numbers like this. The order of magnitude is all you need for any useful work. The processes probably took many years anyway. Except in the laboratory, speciation doesn't happen overnight...
• by BloodSprite (557023) on Wednesday February 20, 2008 @12:12PM (#22489052) Homepage
does this effect Dark Matter, Missing Mass calculations so that they balance now? (or are a smaller magnitude?)
• by Nodlehs (860786) on Wednesday February 20, 2008 @01:02PM (#22489838)
Now you will say that I could always revert the changes ... but that means that not only would I have to write the article, but constantly "maintain" and "protect" it as well. It's the latter prospect that is discouraging.
If you have enough energy to check slashdot regularly, you have enough energy to check a wikipedia article once a week to see that information you obviously care about is maintained.
On the other hand, if I were contacted by an editor to write for a "real/classical" encyclopedia, I could be assured that my hard work would be protected.
Real? Because classic literature is NEVER wrong... And you are always right too? right? ...
• by spun (1352) <loverevolutionary@@@yahoo...com> on Wednesday February 20, 2008 @01:17PM (#22490078) Journal
I wonder where I've heard that word before.
The one guy who calculated global warming is a myth, and all the dittoheads who parrot back the misinformation without any thought in their tiny, birdlike brains?
• by pkphilip (6861) on Wednesday February 20, 2008 @01:43PM (#22490498)
"Why do I feel your hidden conclusion from this is 'Jebus is teh g0d!'"
Interesting that this was brought up. Questioning a "scientific finding" these days or even implying that there may be problems in how the scientific research is being conducted can bring all kinds of interesting people from the woodwork - it is an act about as sacrilegious as arguing before the pope during the dark ages that the sun is not, in fact, rotating around the earth.
I fully expect to be modded down to oblivion for this and I honestly couldn't care less.
• by PitaBred (632671) <slashdot.pitabred@dyndns@org> on Wednesday February 20, 2008 @01:59PM (#22490798) Homepage
It's because you specifically noted primate and human evolution versus the theory of evolution in general, somehow implying that humans are special and outside the system, and you also used a fallacious argument about "Well, if we were wrong about one thing, we could be wrong about everything in science!". This is typically an argument of "Intelligent Design theorists", which is why the GPP brought it up. There have always been problems with scientific research in all fields being imperfect, because humans do it. Stating that you think this is some kind of new thing, or only new in your field of interest, is disingenuous.
• by uhlume (597871) on Wednesday February 20, 2008 @04:36PM (#22493240) Homepage
You have no evidence of such an occurrence in this case, and I'd challenge you to find conclusive and credible evidence of such a phenomenon in any other scientific consensus.
Boldly-worded Slashdot write-up and subsequent rush to Wikipedia notwithstanding, all we have here is a brief article in a little-known Australian paper, vaguely referencing an as-yet unpublished study by a group of astronomers who seem (it's hard to say anything without reference to the study itself) to have re-interpreted existing data to support a finding contradictory to the current consensus, probably within a relatively narrow domain. A new consensus may or may not be built as other scientists independently verify or discredit the methodology and findings of the study. Sensationalistic headlines aside, a single new study does not automatically establish or dissolve consensus, nor should it. This is precisely what the process of scientific consensus is about, and why scientists (and others) rightly trust it.
• by brian0918 (638904) <brian0918@g m a i l .com> on Wednesday February 20, 2008 @05:22PM (#22493876)
"You just don't like to pay your bills."
I'm fine with my bills. I'm even fine with a voluntary taxation system. I think if someone wants to donate their money to a cause, they should be free to do so. What I am not fine with is the plurality taking away my fundamental rights. Do you deny that we have such rights? Individual rights are the fundamental moral principle when men deal with one another. The majority may not -- morally -- trample the rights of the minority or the individual. Democracy, to the extent it is good, is only as good as its ability to protect individual rights.
|
Friday, October 7, 2011
A number of Ways (Do), owing to the fact that a Do is a particular expression of the Way of the universe itself, have used the term mu to point to the sum and substance of the universe. And since it is the mind after all that perceives the absolute universe, various mental states in the Ways have appellations that utilize the character for mu as well. Originating in Buddhism, but having parallels in other religions, mu means, “the void,” or “nothingness.”--H. E. Davey, The Japanese Way of the Artist
|
Nuclear Scintigraphy
Nuclear scintigraphy, or scanning, is a form of diagnostic imaging designed to detect areas of increased bone metabolism that may signal orthopedic diseases. It is especially helpful in identifying multiple areas in the skeletal system that may be contributing to lameness. Bone scans enable us to look at regions of the body, such as the back, pelvis and hips of the horse, that are inaccessible to other imaging modalities.
During the procedure a patient is injected with a radioactive dye (radioisotope) that accumulates around inflamed bones. After the injection, the horse is placed in a stall for a few hours to allow the isotope to settle into the affected areas. The horse is then scanned with a gamma camera, which is similar to a large Geiger counter. The gamma camera detects the isotope signal as it emanates from the horse and produces an image that highlights specific areas of inflammation. From these images, the clinician is able to determine the affected areas and narrow down the potential causes of lameness.
Please contact the Hospital for Large Animals to make an appointment.
Hospital for Large Animals
Cummings School of Veterinary Medicine at Tufts University
53 Willard Street
North Grafton, MA 01536
Phone: (508) 839-7926
Fax: (508) 839-7931
Emergency Services: (508) 839-7926
|
So gimbals, 3 accumulated axial rotations, do not really work very well for orienting an object. Gimbals can be locked, and it is very unintuitive to control them. How do we fix these problems?
There are a few downsides to this approach. First, a 4x4 matrix is rather larger than 3 floating-point angles. But a much more difficult issue is that successive floating-point math can lead to errors. If you keep accumulating successive transformations of an object, once every 1/30th of a second for a period of several minutes or hours, these floating-point errors start accumulating. Eventually, the orientation stops being a pure rotation and starts incorporating scale and skewing characteristics.
The solution here is to re-orthonormalize the matrix after applying each transform. A coordinate system (which a matrix defines) is said to be orthonormal if the basis vectors are of unit length (no scale) and each axis is perpendicular to all of the others.
Unfortunately, re-orthonormalizing a matrix is not a simple operation. You could try to normalize each of the axis vectors with typical vector normalization, but that would not ensure that the matrix was orthonormal. It would remove scaling, but the axes would not be guaranteed to be perpendicular.
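To make this concrete, re-orthonormalizing a basis can be done with the Gram-Schmidt process: each axis has its projection onto the previously processed axes removed before being normalized. The tutorial's code is C++, so the following is purely an illustrative sketch in Python:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return [x * s for x in v]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return scale(v, 1.0 / n)

def gram_schmidt(basis):
    """Orthonormalize basis vectors: subtract each vector's projection
    onto the axes already processed, then normalize it."""
    out = []
    for v in basis:
        for u in out:
            v = sub(v, scale(u, dot(u, v)))
        out.append(normalize(v))
    return out

# A basis that has drifted: slightly scaled and slightly skewed axes.
drifted = [[1.01, 0.00, 0.0],
           [0.02, 0.99, 0.0],
           [0.00, 0.01, 1.0]]
ortho = gram_schmidt(drifted)
```

Note that Gram-Schmidt privileges the first axis (it is only rescaled, never tilted), which is one reason it is not an ideal repair for an orientation matrix.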
Orthonormalization is certainly possible. But there are better solutions. Such as using something called a quaternion.
A quaternion is (for the purposes of this conversation) a 4-dimensional vector that is treated in a special way. Any pure orientation change from one coordinate system to another can be represented by a rotation about some axis by some angle. A quaternion is a way of encoding this angle/axis rotation:
Equation 8.1. Angle/Axis to Quaternion

For a rotation by angle θ about a unit axis (x, y, z):

q = (x·sin(θ/2), y·sin(θ/2), z·sin(θ/2), cos(θ/2))
Assuming the axis itself is a unit vector, this will produce a unit quaternion. That is, a quaternion with a length of 1.
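The angle/axis construction can be sketched as follows. This is not the tutorial's C++/GLM code; it is a language-neutral illustration in Python, storing components in the (x, y, z, w) order used in the text:

```python
import math

def angle_axis_to_quat(angle_rad, axis):
    """Build a quaternion (x, y, z, w) for a rotation of angle_rad
    about a unit-length axis: vector part scaled by sin(theta/2),
    scalar part cos(theta/2)."""
    half = angle_rad / 2.0
    s = math.sin(half)
    x, y, z = axis
    return (x * s, y * s, z * s, math.cos(half))

def quat_length(q):
    return math.sqrt(sum(c * c for c in q))

# A 90-degree rotation about the +Z axis produces a unit quaternion.
q = angle_axis_to_quat(math.pi / 2.0, (0.0, 0.0, 1.0))
```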
Quaternions can be considered to have two parts: a vector part and a scalar part. The vector part is the first three components, when displayed in the order above. The scalar part is the last component.
Quaternion Math
Quaternions are equivalent to orientation matrices. You can compose two orientation quaternions using a special operation called quaternion multiplication. Given the quaternions a and b, the product of them is:
Equation 8.2. Quaternion Multiplication

Writing each quaternion as a vector part v and a scalar part s:

(a*b).v = a.s·b.v + b.s·a.v + a.v × b.v
(a*b).s = a.s·b.s − a.v · b.v
If the two quaternions being multiplied represent orientations, then the product of them is a composite orientation. This works like matrix multiplication, except only for orientations. Like matrix multiplication, quaternion multiplication is associative ((a*b) * c = a * (b*c)), but not commutative (a*b != b*a).
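The associativity and non-commutativity claims can be checked directly. A Python sketch (again, not the tutorial's C++; components in the text's (x, y, z, w) order, using the standard Hamilton product):

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

s = math.sin(math.pi / 4.0)
c = math.cos(math.pi / 4.0)
a  = (s, 0.0, 0.0, c)   # 90 degrees about X
b  = (0.0, s, 0.0, c)   # 90 degrees about Y
qz = (0.0, 0.0, s, c)   # 90 degrees about Z

lhs = quat_mul(quat_mul(a, b), qz)   # (a*b)*c
rhs = quat_mul(a, quat_mul(b, qz))   # a*(b*c)
ab = quat_mul(a, b)
ba = quat_mul(b, a)                  # differs from ab: not commutative
```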
The main difference between matrices and quaternions that matters for our needs is that it is easy to keep a quaternion normalized. Simply perform a vector normalization on it after every few multiplications. This enables us to add numerous small rotations together without numerical precision problems showing up.
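To illustrate the point about accumulation, the sketch below (Python, same hypothetical (x, y, z, w) helpers as above, not the tutorial's code) composes many tiny rotations and renormalizes every hundred multiplications; the result stays a unit quaternion:

```python
import math

def quat_mul(a, b):
    # Hamilton product, components stored as (x, y, z, w).
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# A tiny step: 0.1 degrees about the Z axis.
half = math.radians(0.1) / 2.0
step = (0.0, 0.0, math.sin(half), math.cos(half))

orientation = (0.0, 0.0, 0.0, 1.0)  # identity orientation
for i in range(10000):
    orientation = quat_mul(orientation, step)
    if i % 100 == 99:  # cheap fix: renormalize every few multiplications
        orientation = normalize(orientation)
orientation = normalize(orientation)
length = math.sqrt(sum(c * c for c in orientation))
```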
There is one more thing we need to be able to do: convert a quaternion into a rotation matrix. While we could convert a unit quaternion back into angle/axis rotations, it's much preferable to do it directly:
Equation 8.3. Quaternion to Matrix

For a unit quaternion (x, y, z, w), the equivalent rotation matrix is:

    | 1 - 2(y² + z²)   2(xy - zw)       2(xz + yw)     |
    | 2(xy + zw)       1 - 2(x² + z²)   2(yz - xw)     |
    | 2(xz - yw)       2(yz + xw)       1 - 2(x² + y²) |
This does look suspiciously similar to the formula for generating a matrix from an angle/axis rotation.
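The conversion can be sanity-checked numerically. A Python sketch (components in the text's (x, y, z, w) order; the matrix follows the standard unit-quaternion-to-rotation formula, not GLM's actual implementation):

```python
import math

def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (x, y, z, w)."""
    x, y, z, w = q
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ]

# 90 degrees about +Z should map the X axis onto the Y axis.
s = math.sin(math.pi / 4.0)
q = (0.0, 0.0, s, math.cos(math.pi / 4.0))
m = quat_to_matrix(q)
vx = [m[0][0], m[1][0], m[2][0]]  # image of (1, 0, 0) is the first column
```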
Composition Type
So our goal is to compose successive rotations into a final orientation. When we want to increase the pitch, for example, we will take the current orientation and multiply into it a quaternion that represents a pitch rotation of a few degrees. The result becomes the new orientation.
But which side do we do the multiplication on? Quaternion multiplication is not commutative, so this will have an effect on the output. Well, it works exactly like matrix math.
Our positions (p) are in model space. We are transforming them into world space. The current transformation is represented by the orientation O. Thus, to transform points, we use O*p.
Now, we want to adjust the orientation O by applying a small pitch change. Well, the pitch of the model is defined in model space. Therefore, the pitch change (R) is a transformation that takes coordinates in model space and transforms them into the pitched space. So our total transformation is O*R*p; the new orientation is O*R.
Yaw Pitch Roll
We implement this in the Quaternion YPR tutorial. This tutorial does not show gimbals, but the same controls exist for yaw, pitch, and roll transformations. Here, pressing the spacebar will switch between right-multiplying and left-multiplying the YPR values into the current orientation. Right-multiplication (post-multiplication) applies the YPR transforms in model space.
Figure 8.3. Quaternion YPR Project
Quaternion YPR Project
The rendering code is pretty straightforward.
Example 8.2. Quaternion YPR Display
void display()
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glutil::MatrixStack currMatrix;
    //Apply the orientation quaternion as a rotation matrix.
    currMatrix.ApplyMatrix(glm::mat4_cast(g_orientation));
    currMatrix.Scale(3.0, 3.0, 3.0);

    //Set the base color for this object.
    ...
}
Though GLSL does not have quaternion types or quaternion arithmetic, the GLM math library provides both. The g_orientation variable is of the type glm::fquat, which is a floating-point quaternion. The glm::mat4_cast function converts a quaternion into a 4x4 rotation matrix. This stands in place of the series of 3 rotations used in the last tutorial.
In response to keypresses, g_orientation is modified, applying a transform to it. This is done with the OffsetOrientation function.
Example 8.3. OffsetOrientation Function
void OffsetOrientation(const glm::vec3 &_axis, float fAngDeg)
{
    float fAngRad = Framework::DegToRad(fAngDeg);

    glm::vec3 axis = glm::normalize(_axis);
    axis = axis * sinf(fAngRad / 2.0f);
    float scalar = cosf(fAngRad / 2.0f);

    glm::fquat offset(scalar, axis.x, axis.y, axis.z);

    if(g_bRightMultiply)
        g_orientation = g_orientation * offset;
    else
        g_orientation = offset * g_orientation;

    g_orientation = glm::normalize(g_orientation);
}
This generates the offset quaternion from an angle and axis. Since the axis is normalized, there is no need to normalize the resulting offset quaternion. Then the offset is multiplied into the orientation, and the result is normalized.
In particular, pay attention to the difference between right multiplication and left multiplication. When you right-multiply, the offset orientation is in model space. When you left-multiply, the offset is in world space. Both of these can be useful for different purposes.
|
Koch Coke Pile (Ruth Germain/Petroleum Coke Awareness Facebook)
Canada's oil sand mines will eventually produce up to 2 trillion barrels of oil, and what that could mean for the environment has been debated for years. What's often overlooked, though, is the coke byproduct that results from refining the tar-like bitumen of the oil sands into oil.
Petroleum coke is a low-quality, coal-like fuel, and the Marathon Petroleum plant in Detroit has made its role in the oil sands debate impossible to ignore. The refinery was built on the Detroit River more than 70 years ago but began refining Canadian oil sand deliveries just last November.
The coke waste started accumulating then. The New York Times writes that the mound of coke now towers three stories above the street, covers an entire city block, and is owned by Koch Carbon, which is controlled by David and Charles Koch.
Petroleum coke generates up to 10% more CO2 than coal, and new permits allowing its use are no longer issued in the U.S.
Faced with hauling the stuff away and selling it at a loss, Canadian mining companies have been piling it into massive man-made mountains of their own. The immense mounds of coke in the pictures below were photographed during our trip to the oil sands last year.
Coke is used widely in countries like China and Mexico, where emissions are less regulated than in the U.S., but it sells for about 25% less than coal. That means shipping coke from Canada only makes economic sense if it is pumped out with the tar-like bitumen and refined closer to where it will eventually be sold.
Oxbow, a major petroleum coke trader, drew media scrutiny in 2012 after donating $4.25 million to GOP candidates and spending another $1.3 million on lobbyists in the same period.
How this potential concentration of coke could affect the U.S. has yet to be seen. The National Fire Protection Association warns that petroleum coke should be kept from contaminating groundwater at all costs, and U.S. Coast Guard regulations require that any spill with the potential to reach a waterway be reported immediately.
The following photos, from our Alberta oil sands trip last year, show the scope of the coke already backlogged in the region.
Alberta Oil Sand Petroleum Coke Piles (Robert Johnson/Business Insider)