Some scientists think that our ability to study the universe with spacecraft and telescopes might actually be destroying it. It's all part of quantum physics. It's also surfer wisdom. In the November 21st edition of the Telegraph, Roger Highfield explains that our ability to measure the universe may actually shorten its life, because our observations could trigger another "big bang," which would be the end of our world and the start of another one. In the November 13th issue of the Telegraph, Highfield writes about "an impoverished surfer [who] has drawn up a new theory of the universe, seen by some as the Holy Grail of physics, which has received rave reviews from scientists." Garrett Lisi spends most of the year surfing in Hawaii, but has still had time to come up with a new "theory of everything," which includes new atomic particles. His new theory, which he calls "E8," can be tested when the CERN particle accelerator is activated in Geneva, Switzerland in 2008. OUR world may be ending soon and when it happens, don't say we didn't warn you! You can help keep us alive by becoming a subscriber and by clicking the donate button on our homepage. No matter what happens to the universe, it's not too late to save US! Merry Christmas!
Source: http://www.unknowncountry.com/news/will-space-travel-destroy-universe
On this day in 1950, officials of the United States Lawn Tennis Association (USLTA) accept Althea Gibson into their annual championship at Forest Hills, New York, making her the first African-American player to compete in a U.S. national tennis competition. Growing up in Harlem, the young Gibson was a natural athlete. She started playing tennis at the age of 14 and the very next year won her first tournament, the New York State girls' championship, sponsored by the American Tennis Association (ATA), which was organized in 1916 by black players as an alternative to the exclusively white USLTA. After prominent doctors and tennis enthusiasts Hubert Eaton and R. Walter Johnson took Gibson under their wing, she won her first of what would be 10 straight ATA championships in 1947. In 1949, Gibson attempted to gain entry into the USLTA's National Grass Court Championships at Forest Hills, the precursor of the U.S. Open. When the USLTA failed to invite her to any qualifying tournaments, Alice Marble--a four-time winner at Forest Hills--wrote a letter on Gibson's behalf to the editor of American Lawn Tennis magazine. Marble criticized the "bigotry" of her fellow USLTA members, suggesting that if Gibson posed a challenge to current tour players, "it's only fair that they meet this challenge on the courts." Gibson was subsequently invited to participate in a New Jersey qualifying event, where she earned a berth at Forest Hills. On August 28, 1950, Gibson beat Barbara Knapp 6-2, 6-2 in her first USLTA tournament match. She lost a tight match in the second round to Louise Brough, three-time defending Wimbledon champion. Gibson struggled over her first several years on tour but finally won her first major victory in 1956, at the French Open in Paris. She came into her own the following year, winning Wimbledon and the U.S. Open at the relatively advanced age of 30. Gibson repeated at Wimbledon and the U.S. 
Open the next year but soon decided to retire from the amateur ranks and go pro. At the time, the pro tennis league was poorly developed, and Gibson at one point went on tour with the Harlem Globetrotters, playing tennis during halftime of their basketball games. In the early 1960s, Gibson became the first black player to compete on the women's golf tour, though she never won a tournament. She was elected to the International Tennis Hall of Fame in 1971. Though she once brushed off comparisons to Jackie Robinson, the trailblazing black baseball player, Gibson has been credited with paving the way for African-American tennis champions such as Arthur Ashe and, more recently, Venus and Serena Williams. After a long illness, she died in 2003 at the age of 76.
Source: http://www.history.com/this-day-in-history/althea-gibson-becomes-first-african-american-on-us-tennis-tour
Tooth models in this range:
- Dental plaque
- Dental calculus (tartar)
- Inflammation of the root
- Fissure, approximal and smooth surface caries
- 2-part lower incisor with longitudinal section
- 2-part lower canine with longitudinal section
- Lower single-root pre-molar
- 2-part lower twin-root molar with longitudinal section showing caries attack
- 3-part upper triple-root molar with longitudinal section and caries insert

Each tooth is also available singly.

The dentition development model is cast from a natural specimen and comprises 4 upper and lower jaw halves, showing 4 different stages of dentition development:
- New born
- Approx. 5-year-old child
- Approx. 9-year-old child
- Young adult

The dentition development model is supplied on a stand.

This giant dental care model, large enough to be seen from the back of a classroom, shows the upper and lower half of an adult's dentition. A flexible joint between the jaws allows easy movement of the dental care model. Teach kids the proper teeth-cleaning techniques using the giant toothbrush included with this dental care model.

The half lower jaw model represents half of the left lower jaw of a young person. One section of bone is removable to expose the tooth roots, spongiosa, vessels and nerves. The canine and first molar are removable and longitudinally sectioned. The half lower jaw is supplied on a stand.
Source: https://www.inds.co.uk/product-category/education/biology/anatomy/teeth/
The Relationship Between Colonial French and Native American Artifacts at the Louis Blanchette Site, 23SC2101

Author(s): Nicole M. Weber

23SC2101, also known as the Louis Blanchette Site in St. Charles, Missouri, is a multi-component site with both French Colonial and Native American levels. Lindenwood University discovered two outbuildings and two Native American features on the site. Field schools partially excavated the floors of the outbuildings, discovering what are probably Native American artifacts in one of these. The Native American artifacts found at the site are possibly linked to Blanchette's Native American wife, but for the time being it is unclear whether these were left behind by previous Native American occupants of the site. Statistical analysis suggests a relationship between the lithics and Native American pottery found in the dirt floors of the outbuilding and the French Colonial occupation.

Cite this record: The Relationship Between Colonial French and Native American Artifacts at the Louis Blanchette Site, 23SC2101. Nicole M. Weber. Presented at Society for Historical Archaeology, Fort Worth, TX. 2017 (tDAR id: 435361)

Bounding coordinates: min long: -129.199; min lat: 24.495; max long: -66.973; max lat: 49.359
Source: https://core.tdar.org/document/435361/the-relationship-between-colonial-french-and-native-american-artifacts-at-the-louis-blanchette-site-23sc2101
This week at Glenfeliz, we introduced new groups of students to the garden classroom. We learned the rules that keep us and all the living things in the garden safe. Students also got a chance to meet the garden by observing, sketching, and asking questions about what they found. We checked in on our pea plants, and they are growing taller and looking great! Our radishes have begun to sprout! Next we will need to thin the sprouts so that the radish roots can have plenty of space to grow big and round under the soil. The sprouts are edible and delicious, so they will not go to waste once we pull them for thinning. What a great first day for our new kindergarten and 2nd grade classes! We look forward to more fun and learning in the garden in the coming weeks. Happy gardening and healthy eating, -Garden Ranger Ashley
Source: http://enrichla.org/uncategorized/new-students-meet-the-garden-at-glenfeliz/
At least 13,486 gas leaks and 2,577 short circuits were registered between March 2020 and July 2021, reported the General Fire Department of Peru (CGBP). In addition, the authority warned that although consumption of electricity and bottled gas (LPG) has increased during the COVID-19 pandemic, both can pose a danger to lives and property if not used properly. The information was released by the Supervisory Agency for Investment in Energy and Mining (Osinergmin) and the General Fire Department of Peru, institutions that alerted the population to accidents involving electricity and gas cylinders. Along these lines, Osinergmin pointed out that many homes use an adapter or "gas outlet" to boost the amount of gas reaching the burners. However, this is highly dangerous "because gas at an inadequate pressure could pass into the hose and cooker, causing leaks," it said. To prevent gas (LPG) leaks, attention must be paid to the pressure regulator, which is connected to the cylinder. The authorities recommend that users choose the "knob" regulator (known as premium), since it offers greater security than the "red lever" type. Osinergmin and the firefighters also recommend the following to the public:
- Buy the cylinder from a formal vendor and insist that it be in good condition, without dents or corrosion.
- Never put the cylinder inside a cupboard or in confined spaces, because in the event of a leak the LPG will not dissipate.
- Use hoses designed for liquefied petroleum gas. They have the letters GLP (Spanish for LPG) written on them, as well as the month and year of manufacture.

Osinergmin: prevent short circuits

To prevent electrical accidents, they recommend keeping installations in good condition. "For example, grounding, the thermomagnetic switch that cuts the current in the event of overloads and the differential switches that protect people's lives against possible electric shocks," they indicated.
Likewise, Osinergmin and the Fire Department warned that it is dangerous to overload electrical outlets and to use poor-quality extension cords, which "can generate short circuits and fires. The recommendation is to use 'surge suppressors' and extensions with thick, double-sheathed cables known as vulcanized cables," they indicated. In addition, they reported that any revision or change to the electrical installations must be carried out by a specialist.
Source: https://247newsagency.com/economy/5583.html
A lot of what gets discussed here in relation to the greenhouse effect is relatively simple, and yet can be confusing to the lay reader. A useful way of demonstrating that simplicity is to use a stripped down mathematical model that is complex enough to include some interesting physics, but simple enough so that you can just write down the answer. This is the staple of most textbooks on the subject, but there are questions that arise in discussions here that don't ever get addressed in most textbooks. Yet simple models can be useful there too. I'll try and cover a few 'greenhouse' issues that come up in multiple contexts in the climate debate: why 'radiative forcing' works as a method for comparing different physical impacts on the climate, and why you can't calculate climate sensitivity just by looking at the surface energy budget. There will be mathematics, but hopefully it won't be too painful.

So how simple can you make a model that contains the basic greenhouse physics? Pretty simple actually. You need to account for the solar radiation coming in (including the impact of albedo), the longwave radiation coming from the surface (which depends on the temperature) and some absorption/radiation (the 'emissivity') of longwave radiation in the atmosphere (the basic greenhouse effect). Optionally, you can increase the realism by adding feedbacks (allowing the absorption or albedo to depend on temperature), and other processes – like convection – that link the surface and atmosphere more closely than radiation does. You can skip directly to the bottom-line points if you don't want to see the gory details.

The Greenhouse Effect

The basic case is set up like so: solar radiation coming in is S = (1 − a)·TSI/4, where a is the albedo, TSI the solar 'constant' and the factor 4 deals with the geometry (the ratio of the area of the disk to the area of the sphere).
The surface emission is G = σT_g⁴, where σ is the Stefan-Boltzmann constant and T_g is the surface temperature, and the atmospheric radiative flux is written A = λσT_a⁴, where λ is the emissivity – effectively the strength of the greenhouse effect. Note that this is just going to be a qualitative description and can't be used to quantitatively estimate the real world values. There are three equations that define this system – the energy balance at the surface, in the atmosphere and for the planet as a whole (only two of which are independent). We can write the equations in terms of the energy fluxes (instead of the temperatures) since it makes the algebra a little clearer:

Surface: S + A = G
Atmosphere: λG = 2A
Planet: S = (1 − λ)G + A

The factor of two for A (the radiation emitted from the atmosphere) comes in because the atmosphere radiates both up and down. From those equations you can derive the surface temperature as a function of the incoming solar and the atmospheric emissivity as:

G = σT_g⁴ = 2S/(2 − λ)

If you want to put some vaguely realistic numbers to it, then with S = 240 W/m2 and λ = 0.769, you get a ground temperature of 288 K – roughly corresponding to Earth. So far, so good.

Point 1: It's easy to see that G (and hence T_g) increases from S to 2S as the emissivity λ goes from 0 (no greenhouse effect) to 1 (maximum greenhouse effect) i.e. increasing the greenhouse effect warms the surface. This is an extremely robust result, and indeed has been known for over a century. One little subtlety: note that the atmospheric temperature is cooler than the surface – this is fundamental to there being a greenhouse effect at all. In this example it's cooler because of the radiative balance, while in the real world it's cooler because of adiabatic expansion (air cools as it expands under lower pressure) modified by convection.

Now what happens if something changes – say the solar input increases, or the emissivity changes? It's easy enough to put in the new values and see what happens – and this will define the sensitivity of the system.
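The balance algebra is easy to check numerically. Here is a minimal sketch (not from the original post; function names are my own) of the one-layer model, with S the absorbed solar flux, lam the emissivity λ, and G = 2S/(2 − λ) the surface flux:

```python
# One-layer greenhouse model: surface balance S + A = G and
# atmosphere balance lam*G = 2*A together give G = 2*S/(2 - lam).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def surface_flux(S, lam):
    """Upward surface flux G (W/m^2) for solar input S and emissivity lam."""
    return 2.0 * S / (2.0 - lam)

def surface_temp(S, lam):
    """Surface temperature T_g (K) from G = sigma * T_g**4."""
    return (surface_flux(S, lam) / SIGMA) ** 0.25

# With S = 240 W/m^2 and lam = 0.769 (the illustrative values in the text):
print(round(surface_temp(240.0, 0.769)))  # -> 288 (K), roughly Earth-like
```

Setting lam = 0 recovers G = S (no greenhouse effect, T_g ≈ 255 K), while lam = 1 gives the maximum G = 2S, illustrating Point 1.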
We can also calculate the instantaneous change in the energy balance at the top of the atmosphere as S or λ changes while keeping the temperatures the same. This is the famed 'radiative forcing' you've heard so much about. That change (+ve going down) is:

ΔF = ΔS + Δλ·G₀/2

where ΔS and Δλ are the small changes in solar and change in emissivity respectively. The subscripts indicate the previous equilibrium values. We can calculate the resulting change in G as:

ΔG = 2ΔF/(2 − λ₀)

so there is a direct linear connection between the radiative forcing and the resulting temperature change. In more complex systems the radiative forcing is a more tightly defined concept (the stratosphere or presence of convection make it a little more complex), but the principle remains the same:

Point 2: Radiative forcing – whether from the sun or from greenhouse gases – has pretty much the same effect regardless of how it comes about.

The ratio ΔG/ΔF = 2/(2 − λ₀) is the sensitivity of G to the forcing for this (simplified) system. To get the sensitivity of the temperature (which is the more usual definition of climate sensitivity, ΔT_g = λ_s·ΔF), you need to multiply by dT_g/dG = 1/(4σT_g³), i.e. λ_s = 2/((2 − λ₀)·4σT_g³). For the numbers given above, it would be about 0.3 C/(W/m2). Again, I should stress that this is not an estimate for the real Earth!

As an aside, there have been a few claims (notably from Steve Milloy or Sherwood Idso) that you can estimate climate sensitivity by dividing the change in temperature due to the greenhouse effect by the downwelling longwave radiation. This is not even close, as you can see by working it through here. The effect on G due to the greenhouse effect (i.e. the difference between having λ = 0 and its actual value) is G − S = λG/2, and the downward longwave radiation is just A = λG/2; converting the flux difference to a temperature change and dividing one by the other simply gives 1/(4σT_g³) – which is not the same as the correct expression above – in this case implying around 0.2 C/(W/m2) – and indeed is always smaller. That might explain its appeal of course (and we haven't even thought about feedbacks yet…).
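The forcing and sensitivity expressions for this one-layer model (ΔF = ΔS + Δλ·G₀/2 and λ_s = 2/((2 − λ₀)·4σT_g³)) can likewise be checked numerically. A sketch, with my own function names and the illustrative S = 240 W/m2, λ = 0.769:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def forcing(dS, dlam, S0=240.0, lam0=0.769):
    """Instantaneous TOA radiative forcing dF = dS + dlam*G0/2 (temps held fixed)."""
    G0 = 2.0 * S0 / (2.0 - lam0)
    return dS + dlam * G0 / 2.0

def climate_sensitivity(S0=240.0, lam0=0.769):
    """Temperature change per unit forcing: (2/(2 - lam0)) / (4*sigma*T_g^3)."""
    G0 = 2.0 * S0 / (2.0 - lam0)
    Tg = (G0 / SIGMA) ** 0.25
    return (2.0 / (2.0 - lam0)) / (4.0 * SIGMA * Tg ** 3)

print(round(climate_sensitivity(), 2))  # -> 0.3 C/(W/m2) for these numbers
```

The erroneous "divide by downwelling longwave" estimate corresponds to dropping the 2/(2 − λ₀) factor, which for these numbers gives roughly 0.2 C/(W/m2) – always smaller than the correct value.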
Point 3: Climate sensitivity is a precisely defined quantity – you can't get it just by dividing an energy flux by any old temperature.

Now we can make the model a little more realistic by adding in 'feedbacks' or amplifying factors. In this simple system, there are two possible mechanisms – a feedback on the emissivity or on the albedo. For instance, making the emissivity a function of temperature is analogous to the water vapour feedback in the real world and making the albedo a function of temperature could be analogous to the ice-albedo or cloud-cover feedbacks. We can incorporate the first kind of physics by making λ dependent on the temperature (or, for arithmetical convenience, on G). Indeed, if we take a special linear form for the temperature dependence and write:

λ = λ₀ + λ′·(G − G₀)

then the result we had before is still a solution (i.e. G₀ = 2S/(2 − λ₀)). However, the sensitivity to changes (whether in the greenhouse effect or solar input) will be different and will depend on λ′. The new sensitivity will be given by

ΔG/ΔF = 2/(2 − λ₀ − λ′G₀)

So if λ′ is positive, there will be an amplification of any particular change; if it's negative, a dampening, i.e. if water vapour increases with temperature then that will increase the greenhouse effect and cause additional warming. For instance, taking λ′ = 3×10⁻⁴ (W/m2)⁻¹ (a value consistent with the numbers above), the sensitivity increases to about 0.33 C/(W/m2). We could do a similar analysis with a feedback on albedo and get larger sensitivities if we wanted. However, regardless of the value of the feedbacks, the fluxes before any change will be the same and that leads to another important point:

Point 4: Climate sensitivity can only be determined from changes to the system, not from the climatological fluxes.

While this is just a simple model that is not really very Earth-like (no convection, no clouds, only a single layer etc.), it does illustrate some relevant points which are just as qualitatively true for GCMs and the real world.
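With a linear feedback λ = λ₀ + λ′(G − G₀), the modified sensitivity 2/(2 − λ₀ − λ′G₀) can be explored numerically. Another sketch (function name is mine; the λ′ values are purely illustrative):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sensitivity_with_feedback(lam_prime, S0=240.0, lam0=0.769):
    """dT_g per unit forcing when lam = lam0 + lam_prime*(G - G0):
    (2/(2 - lam0 - lam_prime*G0)) / (4*sigma*T_g^3)."""
    G0 = 2.0 * S0 / (2.0 - lam0)
    Tg = (G0 / SIGMA) ** 0.25
    return (2.0 / (2.0 - lam0 - lam_prime * G0)) / (4.0 * SIGMA * Tg ** 3)

print(round(sensitivity_with_feedback(0.0), 2))    # -> 0.3  (no feedback)
print(round(sensitivity_with_feedback(3e-4), 2))   # -> 0.33 (positive feedback amplifies)
print(round(sensitivity_with_feedback(-3e-4), 2))  # -> 0.27 (negative feedback dampens)
```

Note that all three cases share the same equilibrium fluxes before any change, which is exactly Point 4: you cannot read the feedback strength off the climatological fluxes themselves.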
You should think of these kinds of exercises as simple flim-flam detectors – if someone tries to convince you that they can do a simple calculation and prove everyone else wrong, think about what the same calculation would be in this more straightforward system and see whether the idea holds up. If it does, it might work in the real world (no guarantee though) – but if it doesn’t, then it’s most probably garbage. N.B. This is a more pedagogical and math-heavy article than most of the ones we post, and we aren’t likely to switch over exclusively to this sort of thing. But let us know if you like it (or not) and we’ll think about doing similar pieces on other key topics.
Source: http://www.realclimate.org/index.php/archives/2007/04/learning-from-a-simple-model/comment-page-2/
Researchers have found that goats develop their own “accents” as they grow older and move among social groups. The study, published in the journal Animal Behavior, shows that a goat’s environment affects his or her calls. These findings challenge the scientific community’s widely-held belief that most mammals’ voices are genetically predetermined. Until now, scientists were only aware of a few species capable of developing unique vocalizations based on their social surroundings, including humans, dolphins, and elephants. However, this research suggests that many more mammals may be capable of developing unique voices. Perhaps more importantly, the conclusions drawn from this new research underscore the significant cognitive abilities of goats and other farm animals, as well as the value of socialization opportunities for the animals. Goats are generally spared the worst horrors of factory farming in the United States, but millions of other highly social and intelligent animals (such as gestating sows, confined in individual crates so narrow that they cannot turn around) are systematically denied the opportunity to socialize and form groups. As our understanding of animal cognition improves, so will our ability to advocate for better animal welfare.
Source: http://www.awionline.org/awi-quarterly/2012-spring/no-kidding-goats-have-accents
How was "The Tyme Appointed"? by Anthony Aveni

Virginia's Powhatan Indians, here represented by Jerry Fortune of the Rappahannock tribe, made the heavens their calendar.

Imagine living without watches or calendars. How would you know when your bills were due or when to plan periodic visits to your dentist or physician? For most cultures that thrived before the invention of the mechanical clock, the time to stop work was reckoned by whether you could see heads on a coin, and lunchtime was determined by raising your hand skyward to see whether the sun had reached its highest point. Longer-term intervals were counted off on notched sticks, and by the lost art of moon watching. The neat thing about the moon is that anyone who pays attention to it can easily discern changes in its face from day to day. The moon begins its month-long cycle as a thin "first crescent," visible in the west shortly after sunset. Within a week it moves farther from the setting sun as it waxes to a D-shaped first quarter seen high on the meridian. Then it bulges through the waxing gibbous phase as it grows to a full moon. Rising directly opposite the setting sun, it takes over the duties of lighting the sky all night in the sun's absence. That seven-day period from quarter to full correlates perfectly with our week, a convenient timing device in agrarian societies because that's just about the time it takes to pick vegetables, carry them to a regional market, dispense them before they spoil, and get back to the fields after a day of rest to repeat the market cycle. During the last two weeks of the 29-day lunar cycle, the process takes place in reverse, as the moon transforms itself through waning gibbous to last quarter and finally to the thinnest of crescents seen hovering over the predawn sun before it disappears for a day or two.
The British term "fortnight," which stands for two weeks or fourteen nights, may have been a spin-off of the early relationship between the week and the month—an interval that divides the month into its pair of familiar half cycles from new to full and from full to new moon. Then it happens all over again. Little wonder lunar mythologies all over the world characterize the man in the moon as a nocturnal hero who waxes to maturity, only to be eaten away in old age by the devil of darkness. Though his feeble remnant falls from the sky, his son soon rises anew to avenge the death of his father. In low-tech societies, the month-to-month intervals are convenient for reckoning major changes in the environment, such as when the rains come and go, when the bear hibernates, when the salmon spawn or the geese go away, when the mulberries are ripe or the nuts ready to crush to make flour, when the corn should be planted and harvested. All of these events and activities functioned as names of months in the orally transmitted calendars of Native Americans. Harvest, the first, and Hunter's, the second, full moons following the autumn equinox on September 22 or 23 are familiar northeast native survivals. The Powhatan Indians, as Virginia's band of Algonquians are known, were no exception.

The moon passes through a cycle of illumination, known as a lunation, in 29.5 days. The composite of twenty-four digital photos, shot by António Cidadão through a reflecting telescope in his home rooftop observatory in Oeiras, Portugal, tracks half a lunation.

The sophisticated use Indians made of natural wealth, suggested by the interpretations of Henricus Historical Park and Pete McKee, embraced the sky. Edward Waterhouse, in the 1623 Virginia Company of London tract A Declaration of the State of the Colony and Affaires in Virginia, described the Powhatans as "naked, tanned, deformed, savages ...no other than wild beasts," and "worse then their old Deuill which they worship."
The Powhatan, to the contrary, were skilled lunar timekeepers. These farming hunter-gatherers, who lived in a land of rivers, bays, and estuaries, grew corn and vegetables in summer, took game in winter, and fished during spring. They reckoned a "moon of stags," a "corn moon," and a first and second "moon of cohonks"—the Algonquian word sounds just like the call of the geese, the sound from which the word derives. Moreover, they tallied their moons by knots on strings and notched sticks. Any effective lunar calendar must be in sync with the seasonal or solar year, which the crop cycle follows. But as nature would have it, the moon and sun have trouble living together in harmony. Our years of 365+ days can accommodate twelve lunar cycles, or 354 days, with a shortfall of eleven days, or thirteen with an overrun of eighteen, that is to say, 383 days. Our Roman forebears, who bequeathed us the calendar we use, force-fitted the two cycles together. They artificially lengthened the months to thirty and thirty-one days, a most unnatural choice for attentive sky watchers, which the Romans decidedly were not. Another solution for keeping seasonal time by the moon would be to keep successive twelve-month years, inserting a thirteenth month into the year cycle when necessary to make up for lost time, as we do with our Leap Year. Many Native American tribes, likely including the Powhatans, did just that. Some were more precise with their lunar reckoning. We know the Delaware named the phases of the same moon; for example, the new moon, likely the first visible crescent; the round or full moon; and the half round, or probably last quarter, moon. The intervals between the directly visible phases—ranging from a few to several days—proved convenient in day-to-day practical operations. Think of how often in a given day you refer to activities that will take place "after the weekend," "early next week," or "in a few weeks." 
"Little Corn," "Great Corn," "Turkey," "Cold Meal," and "Deer" are a long way from our abstract January, February, March, and April, but what we learn from Native American timekeeping is that the names given to time periods represent "lived time"—the activity itself. And in any successful society such activity would necessarily include scheduling the conduct of war. One of the most revealing discoveries about the recently deciphered inscriptions of the ancient Maya of Yucatan is that they undertook raids on neighboring cities when the maize was already in the ground, that is, after the intensive labor associated with sowing the crop following the rains had already been dispensed with. We know this because hieroglyphs and imagery pertaining to warfare have been correlated with the appearance of the planet Venus at specific times in the seasonal year. The Maya were conducting real "Star Wars." As we learn from "The tyme appointed," Mary Miley Theobald's article published elsewhere in this issue, although there were undoubtedly some profoundly political causes, such as loss of land, dissatisfaction over English policies, avenging recent wrongs, and so forth, the crucial question of the proximate cause or causes remains unsettled. Timing of the end-of-winter attack is another issue, and this is where the moon enters our story. Early spring would have been an illogical time for the Powhatan to rise against an adversary.

The Virginia Company of London emphasized the naturalism of Eiasuntomino on a 1615 broadside promoting a fund-raising lottery for Jamestown. Matahan, also portrayed on the Virginia Company of London broadside, lived in adaptive harmony with the rhythms of Nature's seasons.

John Smith tells us that May was the main planting month for corn, along with pumpkin and melon, with some early planting in April and later planting in June. Each crop was reaped four months later.
These crucial Nepinough, or corn-earing, months were the busiest time in the agricultural cycle; it would have been important to the natives to recoup the fields after an attack by the middle of May to allow enough time to perform these vital activities. Though the record is sparse, given what we know of their methods for keeping time, it is not at all implausible that the Powhatan would have conducted star—or moon—wars of their own. What little we do know of the celestial circumstances surrounding the Powhatan coups of 1622 and 1644 points to it. On the eve of the March 22, 1622, attack, calculations reveal that the moon was in its third quarter phase and that it rose in the south-southeast in the constellation of Sagittarius about an hour past midnight. The moon presented almost precisely the same aspect the night before the April 18, 1644, attack: third quarter, rising, this time in Capricorn, in the same direction, also about one hour after midnight. Successful attacks require not only good timing but advance planning. Waterhouse's account of the first uprising says, "Several days before this bloodthirsty people put their plan into execution they led some of our people through very dangerous woods," and, "On Friday before the day appointed by them for the attack they visited, entirely unharmed, some of our people in their dwellings." If Opechancanough, who masterminded both attacks, wanted to plan them on these dates, the most obvious way to convey his intent to his cohorts would have been to set the lunar clock by counting days from the first visible crescent. For more effective long-range planning the Powhatans, an association of scattered tribes, could have synchronized the strikes to take place on a particular moon in the cycle, provided they shared an intertribal calendar—another not unlikely assumption. 
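The count-from-first-crescent scheme is simple date arithmetic. A hypothetical sketch in Python (the function name is mine; Old Style/New Style calendar subtleties are ignored and days are simply added), using the first crescents recorded for the two attack years:

```python
from datetime import date, timedelta

# Counting 21 days (roughly three-quarters of a 29.5-day lunation) from the
# first visible crescent lands on the third-quarter moon.
def third_quarter(first_crescent):
    return first_crescent + timedelta(days=21)

print(third_quarter(date(1622, 3, 1)))   # -> 1622-03-22, date of the first attack
print(third_quarter(date(1644, 3, 28)))  # -> 1644-04-18, date of the second
```

Twelve such lunations make a 354-day year, eleven days short of the 365-day seasonal year, which is why the count drifts and a thirteenth moon must occasionally be intercalated.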
For the 1622 attack, in the latitude of Virginia, first visible crescent occurred March 1; for the 1644 episode it happened March 28—in both cases twenty-one days, or three-quarters of a lunar cycle, before history records the dramatic results. Why, then, did the two attacks occur a month apart—one in March, the other in April—as reckoned by our calendar? Here we need to call to mind two facts we have already learned about native timekeeping: first, the moon always takes precedence; and second, there can be twelve or thirteen moon cycles in a lunar year. Suppose, for example, that we count moons from the December solstice, which is December 21, the day when the sun rises and sets farthest to the south. Suppose Year One—or better, "Sun One"—ends with the completion of the twelfth moon, eleven days short of the winter solstice of Year Two. Then the first moon of Year Two will begin with the observation of the first crescent around December 10. If Sun Two also contains twelve moons, then the first moon of Sun Three will begin about November 30. Clearly, the next year cycle, or Sun Four, would be a most convenient one in which to insert a thirteenth moon. Now, if we were to count to last quarter from the first crescent of Moon One of Sun One, we would arrive at December 21 + twenty-one days, or January 11. But if we performed the same operation, in say Sun Three, we would land on December 21 (November 30 + twenty-one days). From this hypothetical example it is easy to see that, with a casual system for intercalating months, identical dates in the Powhatan moon calendar could correspond to dates up to a month apart in our sun or seasonal calendar. The moon was in its third quarter in March 1622 and April 1644 when Virginia's Native Americans attacked European intruders. The engravings shown above and below were published in 1675 at Prague by Melchior Küssell. 
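The hypothetical count sketched above can be checked with a few lines of arithmetic. The synodic-month and solar-year lengths below are modern round figures, assumed purely for illustration; nothing here reconstructs actual Powhatan practice.

```python
# Toy check of the hypothetical moon count described above. The month and
# year lengths are modern round figures (assumed), not Powhatan data.
SYNODIC_MONTH = 29.5   # mean days from one first crescent to the next
SOLAR_YEAR = 365.25    # days in the sun (seasonal) calendar

# Twelve moons fall short of a solar year by roughly eleven days,
# which is why the first crescent of Moon One drifts earlier each Sun.
drift = SOLAR_YEAR - 12 * SYNODIC_MONTH          # about 11 days

# Start of Moon One for Suns One to Three, in days relative to the
# December solstice (day 0), assuming no thirteenth moon is inserted.
first_crescents = [0, -drift, -2 * drift]

# Counting twenty-one days (three quarters of a moon) to last quarter:
last_quarters = [fc + 21 for fc in first_crescents]
# Sun One lands about January 11 (day +21); by Sun Three the same count
# lands back near the solstice, weeks earlier in our seasonal calendar.
```

The spread between the two endpoints is what allows identical dates in a casually intercalated moon calendar to fall up to a month apart in the seasonal calendar.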
The conjectural images depict a Native American, who some historians think could have been Opechancanough, leading attacks on Spanish missionaries near Yorktown in 1571 during a first quarter moon. The historical record conflicts on whether Opechancanough took Easter into account when planning the attacks. What makes this difficult to corroborate is that Easter has been computed in so many different ways that reckoning when it was actually celebrated is no simple matter. Incidentally, there is a touch of irony in the fact that our way of calculating Easter depends on the Jewish lunar calendar, and, like dates set in the Powhatan lunar calendar, it, too, is a movable holiday—it floats in the seasonal calendar. To make matters more difficult, because the events under consideration occurred in the century following major corrections of the Western calendar, there has been considerable confusion over whether dates reported by Virginia's earliest historians are given in the Julian, or Old Style, calendar in use in Roman Catholic Europe before 1582 or the Gregorian, New Style, calendar adopted since. The two calendars differed by eleven days at that time, but the computation of Easter yields dates up to a month apart. Thus, in 1622 Easter Sunday, Old Style, fell on April 21, but it occurred on March 27, New Style. Could it be that those who noted that the attack took place around the Easter holiday were referring the Old Style March 22 date to the New Style celebration of Easter, five days before which it took place? By striking coincidence both versions of Easter fell on exactly the same dates in 1644, the year of the second uprising. But this time if we wish to juxtapose it with the Paschal holiday, as some historical accounts require, we must assume that the record refers to the Old Style Easter date. In that case the attack would have taken place three days before Easter.
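The New Style dates quoted here can be checked mechanically. Below is a sketch of the anonymous Gregorian "computus" in the well-known Butcher/Meeus form; note that Old Style (Julian) Easter is computed differently and is not shown.

```python
def gregorian_easter(year):
    """Month and day of Easter Sunday in the Gregorian (New Style)
    calendar, via the anonymous computus (Butcher/Meeus form)."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-like term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

# Both years discussed above yield the New Style date the article gives:
print(gregorian_easter(1622))  # (3, 27), i.e. March 27
print(gregorian_easter(1644))  # (3, 27), i.e. March 27
```

That both uprising years return March 27, New Style, matches the article's observation that the two versions of Easter fell on the same pair of dates in 1622 and 1644.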
Unfortunately no 1644 almanacs seem to survive, but an extrapolation of an Anglican common prayer book dated 1641 suggests that the April, Old Style, date is indeed the most likely one to have been recognized in the colonies. Last, and not often considered in the problem of whether the Powhatan coups were celestially timed, there are the circumstances of the attack on the mission of the Jesuits, who first attempted to establish permanent settlements in Virginia. It happened on the eve of the feast of Purification, or Candlemas, which was February 2, in the year 1571. What transpired in the sky the night before? A quarter moon, this time first quarter, rode high in the sky at sunset, illuminating the landscape until midnight. Might this signal a tradition of strikes on the enemy at quarter moons coinciding with key points in their holiday cycle? Only our lack of knowledge of Powhatan religious and social customs prevents us from speculating further. Anthony Aveni, editor or author of more than two dozen books on ancient astronomy, is the Russell B. Colgate Professor of Astronomy and Anthropology, serving in the department of physics and astronomy and the department of sociology and anthropology at Colgate University. He helped develop the field of archaeoastronomy, and his research in the astronomical history of the Maya Indians of ancient Mexico puts him among the founders of Mesoamerican archaeoastronomy. This is his first contribution to the journal.
An Independence Day parade in Norfolk, Neb. has sparked a renewed controversy involving freedom of speech and freedom of expression. The parade featured a float that depicted a figure that bore the likeness of Barack Obama, who was standing at the door of an outhouse. The words inscribed on the outhouse stated, "Obama Presidential Library." The Nebraska Democratic Party cried foul, claiming that the display was racist. But the float's creator stated that the figure in the float represented himself, and his depiction of the Obama Presidential Library as an outhouse represented his anger concerning Obama's handling of the VA hospital scandal. The complaint of the Nebraska Democratic Party was enough to culminate in a visit by an official from the Department of Justice (DOJ) and the NAACP. The NAACP in particular was not amused by the display and stated that the float's creator failed to adequately express the intended message. While no further information has been made available, one can only wonder what will happen if the Feds from the DOJ decide that the display's creator had directed his sentiments directly at Obama. Does he go to jail? Will he be fined? Will he be charged with a crime? What crime? Is it not allowed in the United States for citizens to openly criticize a president? Is it a crime to use the most captivating means possible to express disgust toward that president, or anyone else in government for that matter? Traditionally America has cherished and protected free speech and expression, even that which most may consider to be inappropriate and even if those in government are offended. The only exceptions to this long history took place in 1798 and again in 1917. In the first instance, President John Adams pushed the passage of the Sedition Act in 1798 over the strong objections of his Vice President and political opponent, Thomas Jefferson. 
The Act made it illegal to oppose the government or to write or publish statements in opposition to any law in America or to vehemently criticize government officials. But when Jefferson became President after Adams's term, he allowed the Sedition Act to expire in 1801. The second official violation of free speech by government took place in 1917 under the Espionage Act, the brainchild of President Woodrow Wilson and politicians who were part of the Progressive Movement. The Act made it a federal crime to spread false rumors about the military, disrupt its operations, or to stir up mutiny or disrupt recruiting. But in 1918 Congress and the President expanded the 1917 law by passing the Sedition Act of 1918, which made it a felony to criticize the government. It is to be noted that during these years the Wilson Administration and the Congress rounded up and incarcerated citizens of German and Austrian descent, tossing them into detention camps within the United States. This action was a response to American involvement in WWI, in spite of the fact that U.S. citizens of German and Austrian descent supported the U.S. war effort. Once the populace finally got rid of Wilson and many of the more extreme Progressives, the Sedition Act of 1918 was allowed to expire in 1921. By this time the mood of the electorate was shifting, and the nation was headed toward a more freedom-oriented society based on low taxes, small government, and meager federal spending. The power and scope of the federal government were drastically rolled back, particularly under President Calvin Coolidge and his supporters in the Republican-controlled Congress. But under Barack Obama and the Democrats who have controlled at least one chamber of the Congress, both chambers from 2006 to 2010, free speech has stood precariously on the brink of extinction.
Recently Attorney General Eric Holder stated that criticizing Barack Obama is racist and implied that there should be legal consequences for doing so, given that such a thing is "hate speech." Further, Obama appointee Elena Kagan, Associate Justice of the U.S. Supreme Court, has written in the past that the government should exercise more often its power to restrict free speech in the name of "a compelling government interest." The entirety of the six years of Obama's presidency has cast a chilling pall over freedom of expression. Say the wrong thing, anything, that offends the speech police, and you will be vilified, excoriated, isolated, and ostracized for being a "racist." If the nation continues down this road unabated, it is not a stretch to envision citizens going to jail for criticizing the president. At present, the only legal remedy at our disposal is to load up both houses of Congress with conservative/libertarian public servants who will summarily disarm the power of the Obama/Democrat machine, making Obama truly a powerless lame duck.
Indian scientists have successfully conducted Mission Shakti, shooting down a live satellite target in low earth orbit (LEO).

About Mission Shakti:
- Mission Shakti is a joint programme of the Defence Research and Development Organisation (DRDO) and the Indian Space Research Organisation (ISRO).
- As part of the mission, an anti-satellite (A-SAT) weapon was launched and targeted an Indian satellite which had been decommissioned. Mission Shakti was carried out from DRDO's testing range in Balasore, Odisha.

About the anti-satellite missile test:
- Called A-SAT for short, it is the technological capability to hit and destroy satellites in space through missiles launched from the ground.
- India's successful 'kill' with an A-SAT weapon is significant.
- No country has used an A-SAT against another nation to date. In every instance, the nations testing anti-satellite missiles have targeted one of their own defunct satellites to showcase their space warfare capabilities.

About international law on weapons in outer space:
- The principal international treaty on space is the 1967 Outer Space Treaty.
- India is a signatory to this treaty, and ratified it in 1982. The Outer Space Treaty prohibits only weapons of mass destruction in outer space, not ordinary weapons.
- India expects to play a role in the future drafting of international law on the prevention of an arms race in outer space, including, inter alia, the prevention of the placement of weapons in outer space, in its capacity as a major spacefaring nation with proven space technology.
- India is not in violation of any international law or treaty to which it is a party, or of any national obligation.

Significance of the mission:
- Mission Shakti has made India the fourth nation in the world with the capability to successfully target satellites through an anti-satellite missile.
- India has a long-standing and rapidly growing space programme, which has expanded considerably in the last five years. The Mangalyaan Mission to Mars was successfully launched.
- Thereafter, the government has sanctioned the Gaganyaan Mission, which will take Indians to outer space.
- India has undertaken 102 spacecraft missions consisting of communication satellites, earth observation satellites, experimental satellites and navigation satellites, apart from satellites meant for scientific research and exploration, academic studies and other small satellites.
- India's space programme is a critical backbone of India's security, economic and social infrastructure.
Simplifications: triangulation: background

Triangulation is the term used to describe a situation in which there are three parties in a chain involving the purchase and/or sale (as the case may be) of the same goods. But instead of the goods physically moving from one party to the next along the chain, they are delivered directly from the first supplier to the last. Consequently there is only one movement of the goods. But there are two supplies: the first between the first supplier and the intermediate supplier (middleman), and the second between the intermediate supplier and the final customer. When illustrated diagrammatically as follows it produces a triangle, hence the name.

In this example a UK company receives an order from a customer in Germany. This is fulfilled by dispatching the goods directly from the UK company's own supplier in France. As the diagram illustrates, there are two supplies (i.e. between the French company and the UK company, and between the UK company and their German customer). But there is only one movement of the goods, from France to Germany.

Applying the normal VAT place of supply rules in these circumstances will result in the UK company being registerable for VAT either in France or Germany. This arises because, without the goods moving to the UK, the UK company is potentially:
- making an intra-EC supply in France (where it purchases the goods) to Germany, or
- receiving an intra-EC supply in Germany where it then makes an onward domestic supply to its German customer.

For more information about this see the manual covering the place of supply of goods (VATPOSG).
The Bete people are located in central west Ivory Coast, numbering about 600,000. Today the vast majority still follow their traditional African religion, believing in a god, "Lago," but at the same time they attend to spirits: spirits of their ancestors, and spirits inhabiting nature. The religious cults give rise to numerous mask performances. Bete carvers are famous for one particular face mask, the gre or nyabwa, characterized by its exaggerated, grimacing, distorted features. In earlier days, this mask presided over the ceremony held when peace was restored after armed conflicts. The mask was also worn to prepare men for war, where it was believed to offer magical protection by instilling fear and terror in potential enemies. The Bete also carve elegant statues. Male and female statues are displayed in shelters or shrines to represent the founders of the community. Other, smaller statuettes may have been carved to represent spouses from the other world, a tradition inspired by the Baule.
1. Research objectives. When writing a dissertation at Masters level it is essential to consider all aspects from which the strength of the piece will be assessed. Original, relevant, manageable research objectives must be formulated – and stated with precision – in order to signal the serious and considered nature of the work you are to undertake.

2. Critical review. It must be shown that your precisely stated research objectives were not snatched out of thin air but emerged as important questions from a thorough critical review of existing research and background literature. Your consummate ability to analyse critically a large volume of material must be coupled with an alert mindfulness of the relevance for your own avenues of research.

3. Deficiencies. Develop the confidence to turn your analytical gaze to existing research in order to identify shortcomings in your chosen field. Identification of deficiencies in existing knowledge is necessary to justify the particular direction of your research objectives, which aim to address such deficiencies and make a valuable novel contribution to the field.

4. Scope. It is not enough that the content of your novel findings be excellently communicated; you must also articulate the scope and position of your work in its broader academic context. Demonstrate your mastery of the subject area by clearly signposting how your dissertation fits in, as well as the limits of its scope.

5. Originality. Originality is, needless to say, a core component of extended pieces of work at Masters level. Having created suitable objectives, gained a thorough understanding of deficiencies in existing knowledge and remained mindful of the scope of your work, you have laid the foundations for making a genuinely original contribution to the knowledge base of your subject area.

6. Methodology. An absolutely key aspect of any dissertation is a thorough discussion of, and justification for, the methodology you have selected. Compare and contrast competing alternatives and thoroughly analyse each to make a convincing rationale for your final choice. Data collection methods should be described in detail such that your research can be reproduced by others. Qualitative research tools such as questionnaires should be put in the appendix.

7. Analysis. Irrespective of the type of research you have undertaken, an extremely important aspect of your final dissertation will be the quality of your analysis. For investigations with a heavy quantitative component, sophisticated statistical analysis will need to be in evidence. Bear in mind also that even more qualitative methods can generally be found to have some statistically analysable numerical component.

8. Findings. The final stages of your dissertation must include a detailed discussion of your findings and the conclusions that you have drawn from these. All conclusive statements should be diligently and precisely crafted to leave no room for ambiguity. Each should also be entirely defensible either empirically or by sound reasoning. A summary of results and conclusions should also appear early in your abstract.

9. Significance of your work. A proper concluding chapter is not complete without serious consideration of the academic significance of your findings for the subject area. This area of discussion should directly recall material from the critical review of current literature and aim to place the present findings in a wider context.

10. Academic conventions. A short reminder where, at this stage, one should not really be necessary: be impeccably faultless in your fluent use of standard academic conventions, including appropriate use of appendices, bibliographies, abstracts, title pages, in-text referencing and footnotes.
Yesterday the results from the SALOME study were published, and the groundbreaking findings are tremendously exciting. Using the law as a catalyst for positive social change, Pivot Legal Society works to improve the lives of marginalized communities. The Study to Assess Longer-term Opioid Medication Effectiveness (SALOME) was a Vancouver-based clinical study testing alternative treatments for those with chronic heroin addiction. It compared injectable hydromorphone to injectable diacetylmorphine (heroin-assisted treatment in the form of pharmaceutical heroin) for long-term street opioid users not currently benefitting from available treatments, such as methadone and suboxone. Previous research in Canada and internationally has already demonstrated the effectiveness of heroin-assisted treatment. The SALOME study demonstrates that hydromorphone is as effective as diacetylmorphine. This is incredible news. Hydromorphone works. Heroin-assisted treatment works. The results mean that doctors, and their patients, have another potential treatment option available to them. The small group of Canadian patients who received, and continue to receive, these medicines -- all of whom took part in the clinical trial -- report substantial improvement to their health and well-being. Receiving the treatment in a supervised, medical setting improved physical and mental-health outcomes. It reduced illicit drug use and related criminal activity since they no longer need to rely on street-grade heroin. It stabilized their lives, many finding housing and employment when previously both had been impossible. While both have been proven effective, the difference in accessing heroin-assisted treatment and hydromorphone is this: whereas hydromorphone is already federally approved as a painkiller, access to diacetylmorphine faces political and regulatory obstacles. 
These obstacles need to be removed, and, with a record number of fentanyl-related overdose deaths among opiate users in the last two years, they need to be removed as soon as possible. In October 2013, then-Health Minister Rona Ambrose added diacetylmorphine to the list of restricted substances available through Health Canada's Special Access Program (SAP). SAP is designed to let patients get medications normally not available in Canada on the basis of credible data supporting the safety and efficacy of the drug for the medical emergency at issue. It's the same program that allows supervised injection facilities like Insite to operate. The decision by the Health Minister effectively cut off access to diacetylmorphine for participants in the SALOME trial via the SAP program. Only a court injunction obtained by Pivot, representing five participants, and Providence Health Care, who led the study, has allowed study participants to continue to access the treatment now that the study has ended. It is, to say the least, an unnecessarily onerous and convoluted process for doctors to access a treatment that has been proven time and time again to work. But injunctions are only temporary, and unless the new federal government reverses the previous Health Minister's decision the ability for patients and doctors to access this incredibly important treatment may yet again be put at risk. There's no need for the new federal government to allow their predecessor's ideologically fuelled decision to stand, especially in light of the many positive steps they've already taken to improve access to life-saving treatment interventions. The SALOME results highlight that the solutions to this health crisis are readily available. It's time now for the federal government to remove restrictions and make all effective addiction treatments available to doctors and patients. As Dr.
Scott MacDonald, physician lead at Vancouver’s Crosstown Clinic, the only clinic where both hydromorphone and heroin-assisted treatment are available, said yesterday at the study results announcement: “We need every tool in the addiction treatment toolkit available.”
There is some good news to report as we approach the two-year anniversary of the Air France 447 accident in the South Atlantic during the late evening hours of May 31, 2009. An unmanned submarine exploration team headed by the Woods Hole Oceanographic Institution – the same group that found the Titanic – located the flight data recorder from Air France 447 on the ocean floor in 12,000 feet of water. The flight disappeared without a distress call, other than a few last-minute computer-generated messages announcing electrical issues, some 750 miles northeast of Brazil that night, taking 228 men, women and children, residents from 32 countries, to their deaths. Two years later, there is little more than speculation about what brought the aircraft down, offering few opportunities for closure of any sort to the grieving families, or to Air France, Airbus, the French BEA or anyone else wondering how and why. The recorder may offer some hope, provided it has not been damaged by two years of exposure to the sea.

An Abundance of Theories

Not surprisingly, there has been plenty of speculation. The flight plan called for the Airbus A330-200 to depart Rio de Janeiro for Paris along a route that would drive it squarely through the Intertropical Convergence Zone (ITCZ), an area near the equator with nearly calm upper-level winds. That lack of wind makes weather behave a bit differently from what we normally see in latitudes further north. A translation from pilot-speak means that in the warm, moist air of the region, thunderstorms often grow unrestricted to enormous heights, some considerably more than 10 miles above the ocean surface, making them impossible to fly over. Significant lines of storms were forecast along the route the Airbus planned to fly that evening, leading some to initially believe the experienced crew flew directly into the storms, or at least too close to them.
Other experts believe the initial wreckage recovered indicates the aircraft hit the water intact, but at an enormous speed, partially debunking the thunderstorm concept. Without solid data from the flight recorders, everything, though, is likely to remain simply speculative.

The Cost of Answers

I chatted not long ago with Matt Bradley, vice president of business development at Vancouver-based Flyht, a commercial aviation data collection and delivery company. I was more than a little surprised to learn that we could have had some useful answers about Air France 447 before the aircraft hit the water that night if real-time data streaming equipment had been on board. Not long after I mentioned a similar idea in a June 2009 TV interview on WGN, a source told me the technology was non-existent, as well as impractical. After talking to Bradley, I wondered how a better understanding of the accident two years ago might have changed the industry. Would we have changed the A330 airspeed probes or was that simply a great sound bite? Was it some operational issue that took the aircraft outside its normal performance envelope? Was it the way the pilots attempted to penetrate the thunderstorms that was the problem? Imagine if we'd known the answers two years ago. Bradley – an A330 pilot himself – said not only does the data-streaming technology exist now, but that it did at the time of the Air France 447 accident. The Airbus simply didn't have the equipment installed. One simple reason is money. Bradley said a unit installed on the Airbus could have run somewhere "between $35,000-$50,000," with similar prices for installation on Boeing, Embraer and Bombardier airframes. Then of course there is the cost to stream the data. Another reason? "Because the public hasn't yet demanded it," he told me. Of course, how could they when no one knows the options even exist. But airframe manufacturers knew it existed in 2009.
Bradley did tell me that full-time data streaming really IS impractical, not to mention some system to analyze those enormous amounts of information. At least it's impractical right now. That doesn't mean an airplane couldn't regularly phone home with a short burst of vital operational data so people knew what was going on every so often. The maintenance messages the Airbus did send were pretty much useless for figuring out what happened that night. Or what about an "emergency" button a crew or computer could trigger with useful data when things get hairy? Certainly not every commercial airplane may need this capability, but airplanes flying the Atlantic, the Pacific or the polar regions certainly should employ it. They carry life rafts for emergencies. Why not emergency data transfer capabilities? When I mentioned this story to a bunch of magazine journalists in New York this weekend, to a person, each thought the capability already existed on international flights. No one could understand how, in this day and age, with the state of technology, an airline could lose an airplane at sea and have no idea what happened for two years … assuming again the recorder delivers something valuable. Honestly, I can't understand not having this device aboard an international airplane either. Rob Mark, publisher
In 1944, Dr. Ancel Keys conducted The Minnesota Starvation Experiment, the most complete experiment of starvation ever done. 36 young healthy men were selected. For the first three months, they got a standard diet of 3200 calories per day. Over the next six months, calories were restricted to 1570. However, calories were adjusted to reach a target weight loss of 24 percent, around 2.5 pounds (1.1 kilograms) per week. Some men were given less than 1000 calories per day. They also walked 22 miles per week as exercise. With such calorie restriction, the men experienced profound physical and psychological changes:
- Cold sensation, even on a sunny summer day
- Strength dropped 21 percent
- Heart rate slowed considerably
- Body temperature dropped to an average of 95.8 F
- Physical endurance dropped by half
- Blood pressure dropped
- The men became extremely tired and dizzy
- They lost hair

Psychologically they were devastated, showing a lack of interest in everything except for food. Some men hoarded cookbooks and utensils. They were constantly hungry and thinking about food, unable to concentrate or dedicate to any mental task. What was happening? The men were eating and burning around 3000 calories a day. Then, suddenly, calories were reduced to 1500 per day, less in some cases. All body functions that needed energy experienced an immediate 30 to 40 percent reduction.

Body temperature dropped to an average of 95.8 F: Calories are needed to heat the body, so fewer calories lowered the body temperature.

Heart rate slowed considerably: Calories are needed for the heart to pump blood. Fewer calories available meant a slower heart rate.

Physical endurance dropped by half: Calories are needed to move the body. With fewer calories available, movement was reduced, resulting in weakness during physical activity.

Unable to concentrate or dedicate to any mental task: Our brain needs calories to function. Fewer calories were available, so cognition was reduced.
- Hair loss: Calories are needed to grow hair. Fewer calories meant that lost hair was not being replaced.
This is the way our body reacts to less available energy. Energy and air are critical for our body to function, and without either of them we will die. If we reduce calorie intake, our body adapts to the available energy, because if we continued to use the same amount as before we would soon burn through our stored energy (fat), then our protein stores (muscle), and then we would die. If we consume 1500 calories per day, the body adapts to burn only 1500 (or a little less, for a safe margin), so we achieve a balance and we don't need to use our stored energy (fat). Our body goes into economy mode and uses less energy for each function, reducing the energy output. You feel lousy, cold, and tired, but you survive; that's the most important thing. The men in the Minnesota Starvation Experiment should have lost 78 pounds (35 kilograms), but they only lost 37 pounds (16.8 kilograms). As the body adapts to the available calorie intake, the only way to lose more weight was with a more severe calorie restriction. What happened after? Once calorie intake was restored to normal values, they regained weight quickly and, in only 12 weeks, their weight was higher than before the experiment. Reduced muscle mass and a slower metabolism meant that the extra calories they were now consuming were more readily converted to fat. Also, as a preventative measure, their recently starved bodies were primed for energy storage, and they stored it very effectively. How Does Calorie Reduction Work? Let's say a man or a woman normally eats 2000 calories per day. Following their doctor's orders, they adopt a low-fat, portion-controlled, calorie-restricted diet, reducing their daily calorie intake to 1500 calories, 500 fewer than before. Their body starts to adapt, and the total energy used also drops to 1500 calories.
First symptoms: they feel lousy, cold, tired, hungry, irritable, and depressed, but they continue making sacrifices to achieve their weight loss goals. In the beginning weight loss is fast, but as their body adapts to the calorie restriction, calorie expenditure decreases to match the 1500 calories per day and bodyweight plateaus. They continue to make the necessary sacrifices, following the diet as prescribed, but one year later things haven't improved. Bodyweight starts to slowly increase, even though they continue to follow the diet. Tired of feeling so lousy, the diet is abandoned and our dieter goes back to eating 2000 calories per day. Since their metabolism has slowed to an output of 1500 calories per day, all the extra calories will be stored as fat, so their weight quickly increases. Does this scenario sound familiar? Accused of lacking willpower, our dieter feels like a failure after so many sacrifices. The truth is that it's not really failure. What happened is the expected, natural outcome of severe calorie restriction. Calorie restriction doesn't work in the long term for weight loss. Imagine that you're running a coal-fired power plant. Every day, to generate energy for the city, you receive and burn 2000 tons of coal. You also have a warehouse to store some coal, just in case you need it. One day, you only get 1500 tons. Should you continue to burn 2000 tons of coal every day? If we did, we would quickly run out of energy, the city would go into total shutdown, and we would certainly get fired for doing a lousy job. A better choice would be to burn only 1500 tons, or maybe a little less to keep a safe margin. Probably some lights would go off, but there wouldn't be a massive blackout, and we would keep our job. As long as we continue to get only 1500 tons of coal, we continue to burn only 1500 tons. Fewer Calories, Less Energy Used The assumption that fewer calories produce more weight loss is simply not true.
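The gap between the naive arithmetic and what actually happened in Minnesota is easy to put in numbers. The sketch below uses the common rule-of-thumb that a pound of body fat stores roughly 3500 calories (that conversion factor is an outside assumption, not something measured in the experiment):

```python
KCAL_PER_POUND_FAT = 3500  # rule-of-thumb assumption, not from the study

def naive_predicted_loss_lb(daily_deficit_kcal, days):
    """Pounds lost if every deficit calorie came straight out of stored fat."""
    return daily_deficit_kcal * days / KCAL_PER_POUND_FAT

# Minnesota experiment: roughly 3200 -> 1570 kcal/day for about 24 weeks
predicted = naive_predicted_loss_lb(3200 - 1570, 24 * 7)
print(round(predicted))  # 78 pounds predicted; the men actually lost about 37
```

The "calories in, calories out" prediction is roughly double the observed loss, which is exactly the adaptation described above.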
The Minnesota Starvation Experiment and other experiments have shown how our bodies adapt and find a balance between calories in and calories out. You lose weight with calorie restriction in the first few months, but as soon as you finish the diet you regain all the weight you have lost. In the process, you feel tired, lousy, and depressed. The Weight Loss Fallacy: Eat Less, Move More Dieticians, governments, and doctors have been screaming "Eat Less, Move More" as the way to lose weight. But it's simply not true; it doesn't work like that. Losing weight triggers two important responses:
- Reduced total energy expenditure
- Hormonal signals that amplify the efforts to acquire more food
Our body adapts to the available energy input, and it uses hormones to amplify hunger signals. That's why the subjects of the Minnesota experiment became so obsessed with food that they started to build kitchen-related tools and all their thoughts were about how to get food. The same thing happens when we are full after eating; our body silences our hunger hormones. The Sustainable Solution for Weight Loss There is no single solution that will work for all of us. Instead, we must find our own solution, the one that works for us. In my recent article, I explain why diets don't work and why all diets are doomed to fail. You can find it here. There are proven steps that will help you to lose weight or avoid weight gain. Examples include avoiding high-sugar beverages and processed or junk food, taking daily walks, skipping breakfast, eating low-carb meals, reducing alcohol consumption, and ditching refined sugar from your diet. In our newsletter, I share these and other tips about what you can do to lose weight and improve your well-being. There's no magic formula, but there are steps and strategies that work. It's a journey we all share, but we each have our own path to follow.
I've recently shared how I've lost weight and have been able to maintain the same weight without restricting my food choices or reducing my calories. Again, this has been working for me for more than one year, but maybe it's not sustainable for you. Learning and experimenting is always the best way. You can find my article here. I'm here to help you find the direction for your next step. Subscribe to our Newsletter and receive one email every Sunday with weight loss tips and tricks to boost your well-being. It's free and you can unsubscribe anytime you want. The Obesity Codebook, available at Amazon.com.
<urn:uuid:ef7b0f9b-9ee4-4c10-9305-7be5083a5ab0>
CC-MAIN-2023-23
https://daystofitness.com/eat-less-move-more-is-not-a-weight-loss-solution/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646181.29/warc/CC-MAIN-20230530230622-20230531020622-00285.warc.gz
en
0.954793
1,805
3.578125
4
Whether you are injured, pregnant, sore, or tired, there is a benefit to training even in a reduced capacity. There is a positive hormonal response from exercise, which can affect a multitude of facets, from body weight to recovery to mental state. Although the nature of the injury may determine what is possible, the goal is still the same: replicate all workout variables as closely as possible.
- Make sure you work within a pain-free range of motion. Where a movement cannot be performed exactly, a trainer should find a substitute that best replicates its basic function and/or range of motion. However, any movement that still relies primarily on the injured joint/body part should be used cautiously, if at all. A trainer may need to get creative at times in order to avoid boredom while still working toward a new skill.
- Single-limb work can be utilized: contrary to the belief that this will result in a problematic muscle imbalance, exercising the non-injured side can reduce atrophy on the injured side. Dumbbells are a perfect tool for one-sided work, and the number of repetitions can be increased in cases where the loading is limited. However, this should not be the only option for someone with an injured limb. If an exercise involves two movement functions, the athlete may be able to perform one of them with both sides. For example, in a thruster, an athlete with an injured upper body may still be able to squat or front squat. If he or she has an injured lower body, the athlete might still be able to press or push press. If there are no reasonable options for an injured person to perform a similar movement, omit the movement or substitute something else.
<urn:uuid:dd88a917-afd1-4bda-8f83-e99e1f4fe0ee>
CC-MAIN-2017-39
http://isabellafitness.com/a-practical-guide-for-scaling/
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689897.78/warc/CC-MAIN-20170924062956-20170924082956-00717.warc.gz
en
0.949169
347
2.546875
3
Britain may only be a small island, but its great scientists and inventors have literally created the modern world: from the invention of the steam engine, computers and the world-wide web to the discovery of the theory of evolution and the atom. In this series some of Britain's leading scientific figures - Stephen Hawking, Richard Dawkins, James Dyson, David Attenborough, Robert Winston, Paul Nurse, Jim Al-Khalili, Kathy Sykes and Olivia Judson - tell the stories of the people behind these innovations. From Isaac Newton to Frank Whittle, James Watt to Isambard Kingdom Brunel, and Joseph Banks to Rosalind Franklin, these are the people who - through blood, sweat and tears - overcame all obstacles in the search for answers. Stephen Hawking and Jim Al-Khalili explain how Isaac Newton saw mathematics at the root of everything, from gravity to light. James Dyson demonstrates Robert Boyle's air pump, which revealed the life-giving invisible world around us, whose laws could be understood through experiment and reason. David Attenborough celebrates the many interests of Christopher Wren, who was best known as an architect, but was equally fascinated by surgery and astronomy. Richard Dawkins explores Robert Hooke's revelatory microscopic world, and champions the virtues of a scientist whose name was almost wiped from the history books by men who despised him: most notably his arch-rival Newton.
<urn:uuid:db67bfba-41ca-4de5-8f9d-793bc05b145c>
CC-MAIN-2017-43
https://topdocumentaryfilms.com/genius-of-britain/
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00449.warc.gz
en
0.931159
290
3.046875
3
Eurozone as of 2013; non-EU areas using the euro
- Union type: Economic & Monetary
- Established: 1 January 1999
- Political control: Euro Group
- Group president: Jeroen Dijsselbloem
- Issuing authority: European Central Bank
- ECB president: Mario Draghi
- Affiliated with: European Union
- GDP (2012): €9.5 trillion
- Trade balance: €81.8 bn surplus
The eurozone, officially called the euro area, is an economic and monetary union (EMU) of 17 European Union (EU) member states that have adopted the euro (€) as their common currency and sole legal tender. The eurozone currently consists of Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. Other EU states (except for the United Kingdom and Denmark) are obliged to join once they meet the criteria to do so. No state has left and there are no provisions to do so or to be expelled. Monetary policy of the zone is the responsibility of the European Central Bank (ECB), which is governed by a president and a board of the heads of national central banks. The principal task of the ECB is to keep inflation under control. Though there is no common representation, governance or fiscal policy for the currency union, some co-operation does take place through the Euro Group, which makes political decisions regarding the eurozone and the euro. The Euro Group is composed of the finance ministers of eurozone states; however, in emergencies, national leaders also form the Euro Group. Since the late-2000s financial crisis, the eurozone has established and used provisions for granting emergency loans to member states in return for the enactment of economic reforms. The eurozone has also enacted some limited fiscal integration, for example in peer review of each other's national budgets.
The issue is highly political and in a state of flux as of 2011 in terms of what further provisions will be agreed for eurozone reform. Monaco, San Marino and Vatican City have concluded formal agreements with the EU to use the euro as their official currency and issue their own coins. Andorra negotiated a similar agreement which will permit it to issue euros as early as 1 July 2013. Others, like Kosovo and Montenegro, have adopted the euro unilaterally. However, these countries do not formally form part of the eurozone and do not have representation in the ECB or the Euro Group. Member states In 1998 eleven member states of the European Union had met the Euro convergence criteria, and the eurozone came into existence with the official launch of the euro (alongside national currencies) on 1 January 1999. Greece qualified in 2000 and was admitted on 1 January 2001 before physical notes and coins were introduced on 1 January 2002 replacing all national currencies. Between 2007 and 2011, five new states acceded.
[Table of eurozone member states: accession date, population, GDP (World Bank, 2009), share of total (nominal) eurozone GDP, and GDP per capita (World Bank, 2009). Only France's row is legible: joined 1999-01-01, population 65,075,373, GDP 2,649,390, 21.26% of total, GDP per capita 40,713; footnoted entries include Cyprus (incl. UK military base) and the French territories New Caledonia[b] and Wallis and Futuna[b].]
Ten countries (Bulgaria, the Czech Republic, Denmark, Hungary, Latvia, Lithuania, Poland, Romania, Sweden, and the United Kingdom) are EU members but do not use the euro. Before joining the eurozone, a state must spend two years in the European Exchange Rate Mechanism (ERM II). As of 2011, the National Central Banks (NCBs) of Latvia, Lithuania, and Denmark participate in ERM II. Denmark and the United Kingdom obtained special opt-outs in the original Maastricht Treaty. Both countries are legally exempt from joining the eurozone unless their governments decide otherwise, either by parliamentary vote or referendum. Sweden gained a de facto opt-out by using a legal loophole.
Sweden is required to join the eurozone as soon as it fulfils the convergence criteria, which include being part of ERM II for two years; joining ERM II is voluntary. Sweden has so far decided not to join ERM II. The 2008 financial crisis increased interest in Denmark and initially in Poland to join the eurozone, and in Iceland to join the European Union, a pre-condition for adopting the euro. However, by 2010 the debt crisis in the eurozone caused interest from Poland and the Czech Republic to cool. Latvian Prime Minister Valdis Dombrovskis applied for eurozone entry in March 2013, and the Economic and Financial Affairs Council of the EU is expected to make a decision on the application in July 2013. Lithuania plans to adopt the euro in 2015. Non-member usage The euro is also used in countries outside the EU. Three states – Monaco, San Marino, and Vatican City – have signed formal agreements with the EU to use the euro and issue their own coins. Nevertheless, they are not considered part of the eurozone by the ECB and do not have a seat in the ECB or Euro Group. Andorra's monetary agreement with the EU to use the euro came into force in April 2012 and will permit it to issue its own euro coins as early as 1 July 2013, provided that Andorra implements relevant EU legislation. The first Andorran coins are expected to be issued on 1 January 2014. Kosovo[g] and Montenegro officially adopted the euro as their sole currency without an agreement and, therefore, have no issuing rights. These states are not considered part of the eurozone by the ECB. However, sometimes the term eurozone is applied to all territories that have adopted the euro as their sole currency. Further unilateral adoption of the euro (euroisation), by both non-euro EU and non-EU members, is opposed by the ECB and EU.
Expulsion and secession While the eurozone is open to all EU member states to join once they meet the criteria, the treaty is silent on the matter of states leaving the eurozone, neither prohibiting nor permitting it. Likewise there is no provision for a state to be expelled from the euro. Some, however, including the Dutch government, favour such a provision being created in the event that a heavily indebted state in the eurozone refuses to comply with an EU economic reform policy. The benefits of leaving the euro would vary depending on the exact situation. If the replacement currency were expected to devalue, the state would experience a large-scale exodus of money, whereas if the currency were expected to appreciate then more money would flow into the economy. Even so, a rapidly appreciating currency would be detrimental to the country's exports. One problem is that leaving the euro cannot be done quickly; new banknotes, for example, must first be printed. During the preparations, a great deal of money would leave the country, and people could be expected to withdraw euros in cash, causing a bank run. The standard theory of currency devaluation holds that a devaluation must take effect immediately once it is announced. Administration and representation The monetary policy of all countries in the eurozone is managed by the European Central Bank (ECB) and the Eurosystem, which comprises the ECB and the central banks of the EU states that have joined the eurozone. Countries outside the eurozone are not represented in these institutions, although all EU member states are part of the European System of Central Banks (ESCB). Non-EU member states have no say in any of the three institutions, even those with monetary agreements, such as Monaco. The ECB is entitled to authorise the design and printing of euro banknotes and the volume of euro coins minted, and its president is currently Mario Draghi.
The eurozone is represented politically by its finance ministers, known collectively as the Euro Group, and is presided over by a president, currently Jeroen Dijsselbloem. The finance ministers of the EU member states that use the euro meet a day before a meeting of the Economic and Financial Affairs Council (Ecofin) of the Council of the European Union. The Group is not an official Council formation, but when the full Ecofin council votes on matters affecting only the eurozone, only Euro Group members are permitted to vote. Since the global financial crisis first began in 2008, the Euro Group has also met irregularly, not as finance ministers but as heads of state and government (like the European Council). It is in this forum, the Euro summit, that many eurozone reforms have been agreed. In 2011, former French President Nicolas Sarkozy pushed for these summits to become regular, twice-yearly events so that the forum could act as a 'true economic government'. On 15 April 2008 in Brussels, Euro Group president Jean-Claude Juncker suggested that the eurozone should be represented at the International Monetary Fund as a bloc, rather than each member state separately: "It is absurd for those 15 countries not to agree to have a single representation at the IMF. It makes us look absolutely ridiculous. We are regarded as buffoons on the international scene." However, Finance Commissioner Joaquín Almunia stated that before there is common representation, a common political agenda should be agreed.
Comparison table (population, GDP, share of world GDP, and trade figures; the column headers were lost):
- Eurozone: 317 million, €8.4 trillion, 14.6%, 21.7% GDP, 20.9% GDP
- EU (27): 494 million, €11.9 trillion, 21.0%, 14.3% GDP, 15.0% GDP
- United States: 300 million, €11.2 trillion, 19.7%, 10.8% GDP, 16.6% GDP
- Japan: 128 million, €3.5 trillion, 6.3%, 16.8% GDP, 15.3% GDP
[Chart: the twenty largest economies in the world by nominal GDP in billions of USD (2011), counting the EU as a single entity and the eurozone as a single entity; legible entries include (01) European Union, (02) United States, (07) United Kingdom, (13) South Korea, and (17) Saudi Arabia.]
Interest rates
Interest rates for the eurozone, set by the ECB since 1999. Levels are in percentages per annum. Between June 2000 and October 2008, the main refinancing operations were variable rate tenders, as opposed to fixed rate tenders. The figures indicated in the table from 2000 to 2008 refer to the minimum interest rate at which counterparties may place their bids.
[Table: ECB key rates by date: deposit facility, main refinancing operations, marginal lending facility.]
Public debt
[Table: public debt by country, as estimated by CIA 2007, OECD 2009, IMF 2009, CIA 2009, Eurostat 2010, and Eurostat 2011.]
Fiscal policies
The primary means for fiscal coordination within the EU lies in the Broad Economic Policy Guidelines, which are written for every member state, but with particular reference to the 17 current members of the eurozone. These guidelines are not binding, but are intended to represent policy coordination among the EU member states, so as to take into account the linked structures of their economies. For their mutual assurance and the stability of the currency, members of the eurozone have to respect the Stability and Growth Pact, which sets agreed limits on deficits and national debt, with associated sanctions for deviation. The Pact originally set a limit of 3% of GDP for the yearly deficit of all eurozone member states, with fines for any state which exceeded this amount.
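The Pact's two headline ceilings, the 3% deficit limit here and the 60% debt-to-GDP threshold mentioned later under peer review, amount to a simple rule check. The following is an illustrative sketch only, not an official EU formula:

```python
# Stability and Growth Pact ceilings described in the text:
# yearly deficit <= 3% of GDP, national debt <= 60% of GDP.
DEFICIT_LIMIT = 0.03
DEBT_LIMIT = 0.60

def sgp_breaches(gdp, deficit, debt):
    """Return which Pact limits a member state breaches, if any."""
    breaches = []
    if deficit / gdp > DEFICIT_LIMIT:
        breaches.append("deficit")
    if debt / gdp > DEBT_LIMIT:
        breaches.append("debt")
    return breaches

# A hypothetical state with a 4.1% deficit and 85% debt-to-GDP breaks both rules
print(sgp_breaches(gdp=1000.0, deficit=41.0, debt=850.0))  # ['deficit', 'debt']
```

In practice enforcement was more nuanced: as the text notes, the 2005 reforms allowed the deficit criterion to take economic conditions and additional factors into account.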
In 2005, Portugal, Germany, and France had all exceeded this amount, but the Council of Ministers had not voted to fine those states. Subsequently, reforms were adopted to provide more flexibility and ensure that the deficit criteria took into account the economic conditions of the member states, and additional factors. The Organisation for Economic Co-operation and Development downgraded its economic forecasts on 20 March 2008 for the eurozone for the first half of 2008. Europe does not have room to ease fiscal or monetary policy, the 30-nation group warned. For the euro zone, the OECD now forecasts first-quarter GDP growth of just 0.5%, with no improvement in the second quarter, which is expected to show just a 0.4% gain. The European Fiscal Compact is a proposal for a treaty about fiscal integration described in a decision adopted on 9 December 2011 by the European Council. The participants are the eurozone member states and all other EU members except for the United Kingdom. Treaty text is still to be drafted and participation approvals from national parliaments are still to be granted. Bailout provisions The late-2000s financial crisis prompted a number of reforms in the eurozone. One was a u-turn on the eurozone's bailout policy that led to the creation of a specific fund to assist eurozone states in trouble. The European Financial Stability Facility (EFSF) and the European Financial Stability Mechanism (EFSM) were created in 2010 to provide, alongside the International Monetary Fund (IMF), a system and fund to bail out members. However the EFSF and EFSM were temporary, small and lacked a basis in the EU treaties. Therefore, it was agreed in 2011 to establish a European Stability Mechanism (ESM) which would be much larger, funded only by eurozone states (not the EU as a whole as the EFSF/EFSM were) and would have a permanent treaty basis. 
As a result, its creation involved agreeing to an amendment to TFEU Article 136 allowing for the ESM, and a new ESM treaty to detail how the ESM would operate. If both are successfully ratified according to schedule, the ESM would be operational by the time the EFSF/EFSM expire in mid-2013. Peer review Strong EU oversight in the fields of taxation and budgetary policy, and the enforcement mechanisms that go with it, have sometimes been described as potential infringements on the sovereignty of eurozone member states. However, in June 2010, a broad agreement was finally reached on a controversial proposal for member states to peer review each other's budgets prior to their presentation to national parliaments. Although showing the entire budget to each other was opposed by Germany, Sweden and the UK, each government would present to their peers and the Commission their estimates for growth, inflation, revenue and expenditure levels six months before they go to national parliaments. If a country were to run a deficit, it would have to justify it to the rest of the EU, while countries with debt of more than 60% of GDP would face greater scrutiny. The plans would apply to all EU members, not just the eurozone, and would have to be approved by EU leaders along with proposals for states to face sanctions before they reach the 3% limit in the Stability and Growth Pact. Poland has criticised the idea of withholding regional funding from those who break the deficit limits, as that would only impact the poorer states. In June 2010 France agreed to back Germany's plan for suspending the voting rights of members who breach the rules. In March 2011, a new reform of the Stability and Growth Pact was initiated, aiming to strengthen the rules by adopting an automatic procedure for imposing penalties in the case of breaches of either the deficit or the debt rules. See also - The self-declared Turkish Republic of Northern Cyprus is not recognised by the EU and uses the Turkish lira.
However the euro does circulate widely. - French Pacific territories use the CFP franc, which is pegged to the euro. - Uses the Swiss franc. However the euro is also accepted and circulates widely. - Aruba is part of the Kingdom of the Netherlands, but not the EU. It uses the Aruban florin, which is pegged to the US dollar. - Currently uses the Netherlands Antillean guilder and plans to introduce the Caribbean guilder on 1 January 2012; both are pegged to the US dollar. - Uses the US Dollar. - Kosovo is the subject of a territorial dispute between the Republic of Serbia and the self-proclaimed Republic of Kosovo. The latter declared independence on 17 February 2008, but Serbia continues to claim it as part of its own sovereign territory. Kosovo's independence has been recognised by 99 out of 193 United Nations member states. - The ECB announced on 22 December 1998 that, between 4 and 21 January 1999, there would be a narrow corridor of 50 base points interest rates for the marginal lending facility and the deposit facility in order to help the transition to the ECB's interest regime. - "Total population as of 1 January". Epp.eurostat.ec.europa.eu. 24 November 2011. Retrieved 8 December 2011. - "Gross domestic product at market prices". Epp.eurostat.ec.europa.eu. Retrieved 8 December 2011. - Key ECB interest rates, ECB - HICP – all items – annual average inflation rate Eurostat - Harmonised unemployment rate by gender – total – [teilm020,; Total % (SA) Eurostat - For the whole of 2012. Euroindicators 15 February 2013, Eurostat - "Countries, languages, currencies". Interinstitutional style guide. the EU Publications Office. Retrieved 2 February 2009. The euro area, European Central Bank - "Who can join and when?". European Commission. Retrieved 2012-09-10. - "The euro outside the euro area". Europa (web portal). Retrieved 26 February 2011. - "Agreements on monetary relations (Monaco, San Marino, the Vatican and Andorra)". European Communities. 30 September 2004. 
Retrieved 12 September 2006. - A glossary issued by the ECB defines "euro area", without mention of Monaco, San Marino, or the Vatican. - "Swedish Parliament EU Information". Swedish Parliament. 4 December 2009. Retrieved 16 January 2010. - "Information on ERM II". European Commission. 22 December 2009. Retrieved 16 January 2010. - Dougherty, Carter (1 December 2008). "Buffeted by financial crisis, countries seek euro's shelter". The New York Times. Retrieved 2 December 2008. - "Czechs, Poles cooler to euro as they watch debt crisis". Reuters. 16 June 2010. Retrieved 18 June 2010. - "Latvia formally applies for eurozone membership". 4 March 2013. Retrieved 4 March 2013. - "Lithuanian government endorses euro introduction plan". 15 min. 2013-02-25. Retrieved 2013-03-03. - "Monetary Agreement between the European Union and the Principality of Andorra". 30 June 2011. Retrieved 10 September 2011. - "The government announces a contest for the design of the Andorran euros". Andorra Mint. 2013-03-19. Retrieved 2013-03-26. - "European Foundation Intelligence Digest". Europeanfoundation.org. Retrieved 30 May 2010. - "Euro used as legal tender in non-EU nations". International Herald Tribune. 1 January 2007. Retrieved 22 November 2010. - "Europe, The eurozone's 13th member". BBC News. 11 December 2001. Retrieved 30 May 2010. - "Unilateral Euroization By Iceland Comes With Real Costs And Serious Risks". Lawofemu.info. 15 February 2008. Retrieved 30 May 2010. - Athanassiou, Phoebus (December 2009) Withdrawal and Expulsion from the EU and EMU, Some Reflections (PDF), European Central Bank. Retrieved 8 September 2011 - Phillips, Leigh (7 September 2011). Netherlands: Indebted states must be made ‘wards’ of the commission or leave euro. EU Observer, 7 September 2011. Retrieved on 2011-09-08 from http://euobserver.com/19/113552. - Eichengreen, Barry (23 July 2011) Can the Euro Area Hit the Rewind Button? (PDF), University of California.
What is Magnitude? Earthquake Magnitude By Analogy
University of South Carolina, Dept. of Geological Sciences

This activity has benefited from input from faculty educators beyond the author through a review and suggestion process. This review took place as part of a faculty professional development workshop where groups of faculty reviewed each other's activities and offered feedback and ideas for improvements. To learn more about the process On the Cutting Edge uses for activity review, see http://serc.carleton.edu/NAGTWorkshops/review.html.

This activity was selected for the On the Cutting Edge Exemplary Teaching Collection. Resources in this top-level collection must have scored Exemplary or Very Good in all five review categories, and must also rate as "Exemplary" in at least three of the five categories. The five categories included in the peer review process are:
- Scientific Accuracy
- Alignment of Learning Goals, Activities, and Assessments
- Pedagogic Effectiveness
- Robustness (usability and dependability of all components)
- Completeness of the ActivitySheet web page

For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.

This page first made public: Jul 5, 2007

Summary: Understanding magnitude scales by analogy to distance. Students use distance as a proxy for understanding how the logarithmic earthquake magnitude scale works. A very simple class or lab exercise for introductory courses to address math-related concepts.

Context for use: Activity designed for an intro physical geology course for nonmajors.

Skills and concepts that students must have mastered: Requires basic map-reading skills; students should know how to convert units in the metric system.

How the activity is situated in the course: This activity is part of a 3-hour lab exercise where the students also complete exercises from the NAGT Physical Geology Lab Manual. It could also be used as a classroom exercise.

Content/concepts goals for this activity: Introduction to the idea of earthquake magnitude scales; relating magnitude to energy release.

Higher-order thinking skills goals for this activity: Numerical unit conversion; understanding logarithmic scales; evaluating an analogy.

Description of the activity/assignment: This is an introductory lab exercise intended to convey the concept of the logarithmic scale used for earthquake magnitude. The students visualize magnitude as a distance over the ground by using a contrived conversion between magnitude and distance. Using distances helps students understand how logarithmic scales like magnitude work, because distance is one of the few familiar scales that spans several orders of magnitude. Students typically use calculators to determine the distance associated with each magnitude. Maps on several scales should be provided in the lab/classroom: campus maps, city maps, state maps, and a national map work well. This activity gives students practice in making unit conversions and in developing arguments by analogy. It addresses student fear of the quantitative aspect and/or inadequate quantitative skills, and addresses student misconceptions.

Determining whether students have met the goals: The mechanics of doing the conversion from magnitude into distance, and then coming up with appropriate units for the distance, can be evaluated from the fill-in-the-blanks.
The most important part of the evaluation is the paragraph in which the student should relate the distances derived back to the magnitude scale. More information about assessment tools and techniques. Download teaching materials and tips
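The contrived magnitude-to-distance conversion used in the actual handout is not reproduced on this page, so the sketch below assumes an illustrative mapping of magnitude M to 10^M millimeters. A short Python script like this generates the same table of distances that students work out with calculators:

```python
# Illustrative sketch of the distance analogy for the logarithmic
# magnitude scale. The specific conversion used in the real handout is
# not given here; this assumes magnitude M maps to 10**M millimeters.

def magnitude_to_distance_mm(magnitude):
    """Map an earthquake magnitude to a distance in millimeters."""
    return 10 ** magnitude

def to_readable(mm):
    """Convert a distance in millimeters to a human-friendly unit string."""
    if mm < 1_000:
        return f"{mm:g} mm"
    if mm < 1_000_000:
        return f"{mm / 1_000:g} m"
    return f"{mm / 1_000_000:g} km"

for m in range(1, 10):
    print(f"Magnitude {m}: {to_readable(magnitude_to_distance_mm(m))}")
```

Under this assumed mapping, each whole-number magnitude step multiplies the distance by ten: magnitude 3 is a 1 m step across a desk, while magnitude 8 spans 100 km on a state map, which is exactly the kind of jump the activity asks students to locate on maps of different scales.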
What Does Subdomain Mean?
A subdomain is a domain that is part of a larger domain under the Domain Name System (DNS) hierarchy. It is used as an easy way to create a more memorable Web address for specific or unique content within a website. For example, it could make it easier for users to remember and navigate to the picture gallery of a site by placing it at the address gallery.mysite.com, as opposed to mysite.com/media/gallery. In this case, the subdomain is gallery.mysite.com, whereas the main domain is mysite.com. A subdomain is also known as a child domain.

Techopedia Explains Subdomain
A subdomain is basically a child domain under a larger parent domain name. In the larger scheme of the Domain Name System, it is considered a third-level domain used to organize site content. In the Web address example above (gallery.mysite.com), the suffix ".com" is the first-level domain, "mysite" is the second-level domain and "gallery" is the third-level domain. Uses of subdomains include:
- Organizing website content according to category, i.e., gallery.mysite.com, faq.mysite.com and store.mysite.com
- Sharing the allotted domain space with other users by providing them subdomains and their own username and password with varying levels of feature access. For example, admin.mysite.com, user1.mysite.com and guest.mysite.com
- Shortening long links and making them easy to remember. For example, the link "http://mysite.com/offers/bonus/referal_id^56$#9?.asp" can be placed into the subdomain "referral.mysite.com" to make it easier to navigate and remember.
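The label hierarchy described above can be sketched in a few lines of Python. This is a naive illustration that assumes a single-label suffix such as ".com"; real-world code should consult the Public Suffix List (for example via a library such as tldextract), since suffixes like ".co.uk" span two labels:

```python
# Minimal sketch: split a hostname into its subdomain part and its
# registered domain. Naively assumes a single-label TLD such as ".com";
# multi-label suffixes like ".co.uk" need the Public Suffix List instead.

def split_hostname(hostname):
    """Return (subdomain, domain) for a hostname like 'gallery.mysite.com'."""
    labels = hostname.lower().rstrip(".").split(".")
    if len(labels) < 2:
        raise ValueError(f"not a dotted hostname: {hostname!r}")
    domain = ".".join(labels[-2:])     # second-level + top-level domain
    subdomain = ".".join(labels[:-2])  # everything to the left; may be ""
    return subdomain, domain

print(split_hostname("gallery.mysite.com"))  # ('gallery', 'mysite.com')
print(split_hostname("mysite.com"))          # ('', 'mysite.com')
```

Nesting works the same way: a fourth-level name such as old.gallery.mysite.com yields the subdomain part "old.gallery" under the same registered domain.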
For further assistance with your assignment, ask at the Belzberg Library Reference Desk or contact Nina Smart, Liaison Librarian for Gerontology (778.782.5043 / [email protected]), Monday to Thursday.

Instructor: Dr. Irving Rootman. "This course is designed to cover and critically examine concepts, models, theories, and practices of health promotion specific to aging populations. It will cover the origins and development of health promotion, population health and health education theories and frameworks, current approaches to health promotion, and applications targeting specific groups of older adults. Students will gain an understanding of the major issues surrounding health promotion and health education at the level of the individual, social network, institution, community, and public policy." [Note: for further information see the course outline]

Health behavior and health education: theory, research, and practice [print] - A guidebook that discusses the foundations of health behaviour as well as different models; it combines theory, research, and practice to provide a comprehensive overview of health behaviour and education.
Health promotion in Canada: critical perspectives on practice [print] - This text is a "comprehensive profile of the history and future of health promotion in Canada," focusing on the issues and concepts most crucial to the Canadian context.
Use the SFU catalogue to find books.
Online journal databases
To find journal articles and government reports, use the following databases:
- Ageline - the major social gerontology database
- Canadian Electronic Library from desLibris - database for Canadian e-books, health and public policy documents (Sample terms: Aged; Health promotion)
- Canadian Research Index - Canadian government publications (Sample terms: Health promotion; Public health)
- Achieving Health for All: A Framework for Health Promotion - a 1986 Health Canada report that examines health promotion in Canada as an "integration of ideas from several arenas of public health, health education and public policy."
- Action statement for Health Promotion in Canada - 1996 action report from the Canadian Public Health Association that breaks down and describes ways of improving efforts to promote health
- Bangkok Charter for Health Promotion in a Globalized World - WHO report that "identifies actions, commitments and pledges required to address the determinants of health in a globalized world through health promotion."
- A new perspective on the health of Canadians - 1974 Health Canada report that "identifies two main health-related objectives: the health care system; and prevention of health problems and promotion of good health."
- Ottawa Charter for Health Promotion - 1986 World Health Organization (WHO) publication that "focused on the [public health] needs in industrialized countries, but took into account similar concerns in all other regions."
Health promotion for older adults - Arthritis Society - Canadian organization focused on "providing education, programs and support" to those with Arthritis; source of research and information on arthritis - Canadian Study of Health and Aging: Study methods and prevalence of dementia - journal article analyzing the prevalence of dementia in five Canadian regions - Dare to age well - website published by the Division of Aging and Seniors Health Canada; contains publications from Health Canada - Dependency, chronic conditions and pain in seniors - Health Reports Supplement that examines the prevalence of dependency and chronic conditions amongst older Canadian adults - General Social Survey: Time Use - Statistics Canada study examining how Canadians spend their time - Healthy Aging Through Healthy Living - report that provides "an evidence based framework for a comprehensive approach and establishes the Ministry of Health's strategic platform for healthy aging" - How Healthy Are Canadians? 
- report on the health of Canadians which focuses on differences in health between men and women - Measured obesity: adult obesity in Canada: measured height and weight - Statistics Canada report - Preventing Chronic Diseases: A Vital Investment - WHO global report on chronic diseases which "presents a state-of-the-art guide to effective and feasible interventions; and provides practical suggestions for how countries can implement these interventions to respond successfully to the growing epidemics" - Population Projections for Canada (2013 to 2063) - Statistics Canada publication that provides population projections based on census data - Social Capital in Action Thematic Policy Studies - Policy Research Initiative report on social relationships as a form of capital; see section: The Role of Social Capital in Aging Well Paradigm shifts in the health arena - BC Healthy Communities - province-wide organization that promotes healthy living; provides research and resources on topics including aging well - Black's Medical Dictionary [print] - comprehensive medical dictionary for non-experts - Healthy Cities, Healthy Children - commentary published by the UNICEF Progress of Nations - Health Promotion Glossary - glossary published by the World Health Organization including information and definitions for terms related to health promotion - The Quality of Life Model - information on model created by the Quality of Life Research Unit at the University of Toronto - Twenty Steps for Developing a Healthy Cities Project - WHO publication on the Healthy Cities Project which "provide a vehicle for the application of health for all (HFA) principles at the local level." 
Predictors of Healthy Aging - Active Ageing: A Policy Framework - WHO publication that proposes questions and discussion on issues related to aging for policy makers - Aging and Seniors - online portal for publications of the Division of Aging and Seniors of the Public Health Agency of Canada Other useful web sites and books - Ageing- website for United Nations Division for Social Policy and Development providing information on current issues as well as resources - Health Promotion - Health Canada webpage providing links on topics of health promotion - Seniors - Health Canada webpage providing information and resources on senior health - Shape the Future of Health Care - information about the Commission on the Future of Health Care in Canada: The Romanow Commission
Limiting Reactant Calculations

In the reactions previously discussed, an amount of only one of the reactants was given. We assumed that we could use as much of the other reactant as we needed. Unfortunately, this is not always the case. Situations where specific amounts of both of the reactants are given are called "limiting reactant" problems. The limiting reactant is the one that runs out first! In limiting reactant problems three possibilities exist: the first reactant is used up first, the second reactant runs out first, or both reactants run out at the same time.

Containers of nuts and bolts are to be threaded together, one nut threaded on one bolt. How many combinations can be made? (diagram of bolts and nuts omitted) Only four nut-bolt combinations can be made. The bolts have run out; the bolts are the limiting factor.

Now suppose two nuts are threaded on one bolt. How many combinations can be made? (diagram of bolts and nuts omitted) Only three nut-bolt combinations can be made. The nuts have run out; the nuts are the limiting factor.

The item with the smallest number is not always the limiting factor. It depends on the ratio of combination!

THOUGHT LAB: 1 car body + 4 wheels + 2 wipers → 1 car
Ratio: 1 car body : 4 wheels : 2 wipers : 1 car

1) If you have 35 car bodies, 120 wheels, and 150 wipers, how many complete cars can you make?
Ans: You can only make a maximum of 30 cars, because every car requires 4 wheels and only 120 wheels are available.

2) a) Which item is limiting?
Ans: The wheels are limiting; in other words, the wheels will run out first.

2) b) Which items are in excess?
Ans: The car bodies and the wipers.

2) c) How much of each excess item remains after the reaction?
If you use up all 120 wheels, you have made 30 cars. Each car also requires 1 body and 2 wipers, so you will use up 30 bodies and 60 wipers. This means you have 5 car bodies and 90 wipers remaining.

THOUGHT LAB: ANALYSIS
1) Does the excess amount of an item affect the quantity of product made?
Ans: No, because the excess items are just extra (left-over) items.

2) Even though there are fewer car bodies than wheels and wipers, explain why car bodies are NOT the limiting item (though they are smallest in amount).
Ans: Because the least amount of them is needed per car: every car requires only 1 body but 4 wheels, so the wheels run out first (you cannot have a car with fewer than 4 wheels!).

LIMITING REACTANT: the reactant which is completely used up. It determines how much product will be produced. Ex: wheels.

EXCESS REACTANT: a reactant that is left over in a reaction. Once the limiting reactant is used up, no more of the excess reactant can be used. Ex: car bodies, wipers.

When solving problems with a limiting reactant, think about: How do we know how much product is produced? Which reactant is limiting? It's the reactant that will produce the least amount of product. Set up the mole ratio for the product with the LIMITING REACTANT.

LIMITING REACTANT EXAMPLE
Li₃N(s) + H₂O(l) → NH₃(g) + LiOH(aq)
a) If you have 4.87 g Li₃N and 5.80 g water for the above reaction, what is the limiting reactant?
b) How much NH₃(g) is produced?
c) How much LiOH is produced?

First we have to balance the chemical equation:
Li₃N(s) + 3 H₂O(l) → NH₃(g) + 3 LiOH(aq)

GIVEN:
Li₃N: m = 4.87 g, M = 34.8 g/mol; therefore n = m/M = 0.140 mol Li₃N
H₂O: m = 5.80 g, M = 18.02 g/mol; therefore n = m/M = 0.322 mol H₂O

1) Find the limiting reactant by looking at both reactants:
a) Li₃N: 1 mol Li₃N → 1 mol NH₃, so 0.140 mol Li₃N → 0.140 mol NH₃
b) H₂O: 3 mol H₂O → 1 mol NH₃, so 0.322 mol H₂O → 0.107 mol NH₃
Since the least amount of NH₃ is produced from H₂O, it is the limiting reactant.

b) How much NH₃ is produced?
Since H₂O is the limiting reactant, we know that 0.107 mol NH₃ is produced (part a).
m = n × M = 0.107 mol × 17.03 g/mol = 1.82 g NH₃

c) How much LiOH is produced?
Set up the mole ratio with the limiting reactant ONLY: 3 mol H₂O → 3 mol LiOH, so 0.322 mol H₂O → 0.322 mol LiOH.
m = n × M = 0.322 mol × 23.95 g/mol = 7.71 g
Therefore, 7.71 g of LiOH will be produced.

Just wondering: how much Li₃N is left over?
Set up the mole ratio with the limiting reactant: 1 mol Li₃N : 3 mol H₂O, so 0.322 mol H₂O uses 0.107 mol Li₃N.
The total available Li₃N was 0.140 mol, but only 0.107 mol of it was used: 0.140 mol - 0.107 mol = 0.033 mol was left over.
m = n × M = 0.033 mol × 34.8 g/mol = 1.15 g of Li₃N will be left over.

Now, let's try a limiting factor (reactant) problem using a chemical reaction! Remember, numbers of atoms and molecules are measured in moles. The balanced equation tells us the ratio in which the atoms and molecules combine to make the products. In the reaction H₂ + I₂ → 2 HI, one molecule of hydrogen is combined with one molecule of iodine to give two molecules of hydrogen iodide. It is also true to say that one mole of hydrogen is combined with one mole of iodine to give two moles of hydrogen iodide. All we have really done is multiply the entire equation through by 6.02 × 10²³ (1 mole).

H₂ + I₂ → 2 HI. Suppose that exactly one mole of H₂ and exactly one mole of I₂ are available. In this case we can make exactly two moles of HI, and no H₂ or I₂ will be left. Now, suppose that we have one mole of H₂ and two moles of I₂. Once the H₂ is used up, no more I₂ can be reacted. One mole of H₂ will use exactly one mole of I₂, leaving an extra mole of I₂ unused. The limiting reactant is H₂ (it ran out first); the excess reactant is I₂ (we have extra I₂). Only two moles of HI can be made. According to our balanced equation, for each H₂ used, two HI are formed. Only one mole of H₂ was used, so only two moles of HI are produced. The quantity of product formed is based on the limiting reactant!

Problem: Given the reaction 1 Ca + 1 Cl₂ → 1 CaCl₂. If we mix 120 grams of calcium and 71 grams of chlorine, which reactant is the limiting factor? How many grams of CaCl₂ can be made?
Solution: Remember that balanced equations are based on moles, so we must first convert the given grams to moles. Moles of Ca = 120 / 40 = 3.0; moles of Cl₂ = 71 / (2 × 35.5) = 1.0. From the balanced equation, 1 Ca requires exactly 1 Cl₂. Since only 1.0 mole of Cl₂ is available, only 1.0 mole of Ca can be consumed and 2.0 moles of Ca remain unused. Chlorine (Cl₂) is the limiting reactant. The amount of product that can be formed is based on the limiting reactant: the equation tells us that for each Cl₂ used, one CaCl₂ is made. Since 1.0 mole of Cl₂ is used, 1.0 mole of CaCl₂ is produced. Grams of CaCl₂ = 1.0 mol × (40 + 2 × 35.5) g/mol = 111 g.

Problem: Given the equation 2 Na + Cl₂ → 2 NaCl. How many grams of NaCl can be made by reacting 69 grams of Na with 5.0 moles of Cl₂?
Solution: Again we must work in moles. Cl₂ is already in moles, but Na must be converted: 69 / 23 = 3.0 moles of Na. From the equation, 2 Na needs 1 Cl₂ (half the moles of Na), so 3.0 mol Na needs 1.5 mol Cl₂. We have 5.0 moles of Cl₂, more than enough. Therefore all of the Na is used, and Na is the limiting reactant! The amount of product is based on the limiting reactant: since 2 Na make 2 NaCl, 3.0 mol Na will make 3.0 mol NaCl. Grams of NaCl = 3.0 mol × (23 + 35.5) g/mol = 175.5 grams of NaCl are formed.

HOMEWORK
Practice Problems: page 309 #31-39
Practice Problems: page 311 #30-50
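The worked Li₃N/water example lends itself to a short script. The sketch below encodes the lesson's molar masses and stoichiometric coefficients and picks the limiting reactant as the one that would yield the fewest moles of NH₃:

```python
# Sketch of the limiting-reactant procedure from the worked example:
#   Li3N + 3 H2O -> NH3 + 3 LiOH
# Molar masses (g/mol) match the values used in the lesson.

M = {"Li3N": 34.8, "H2O": 18.02, "NH3": 17.03, "LiOH": 23.95}
coef = {"Li3N": 1, "H2O": 3, "NH3": 1, "LiOH": 3}  # stoichiometric coefficients

def limiting_reactant(masses):
    """masses: dict of reactant -> grams.
    Returns (limiting reactant, moles of NH3 each reactant could yield)."""
    moles = {r: m / M[r] for r, m in masses.items()}
    # Moles of NH3 each reactant could produce on its own:
    yields = {r: moles[r] * coef["NH3"] / coef[r] for r in masses}
    return min(yields, key=yields.get), yields

lr, yields = limiting_reactant({"Li3N": 4.87, "H2O": 5.80})
print(lr)                               # H2O
print(yields[lr] * M["NH3"])            # grams of NH3 produced, about 1.83 g
```

The script reproduces the hand calculation; its NH₃ mass comes out near 1.83 g rather than exactly 1.82 g because the lesson rounds the mole value to 0.107 partway through, while the code carries full precision to the end.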
Optimization is the selection of a best element from some set of available alternatives. An optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. Optimization means finding the best available value of some objective function over a defined domain, allowing a variety of different types of objective functions and different types of domains.

An optimization problem can be represented in the following way:
Given: a function f: A → R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) <= f(x) for all x in A ("minimization") or such that f(x0) >= f(x) for all x in A ("maximization").

This formulation is called an optimization problem or mathematical programming problem. Many real-world applications are modeled in this general framework. By convention, the standard form of an optimization problem is stated in terms of minimization. A problem in which both the objective function and the feasible region are convex is known as a convex optimization problem.
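For the simple case where the set A is finite, the minimization formulation above translates directly into a brute-force search. The sketch below (purely illustrative, in Python) returns an element x0 with f(x0) <= f(x) for all x in A:

```python
# Minimal sketch of the abstract formulation: find x0 in A minimizing f,
# for the simple case where the domain A is a finite set of candidates.

def argmin(f, A):
    """Return an element x0 of A with f(x0) <= f(x) for all x in A."""
    best = None
    best_val = float("inf")
    for x in A:
        v = f(x)
        if v < best_val:
            best, best_val = x, v
    return best

# Minimize f(x) = (x - 2)^2 over a small discrete domain:
A = [-1, 0, 1, 2, 3, 4]
x0 = argmin(lambda x: (x - 2) ** 2, A)
print(x0)  # prints 2
```

Maximization is the same search with f negated, which is why the standard form can be stated in terms of minimization alone; for continuous or constrained domains, exhaustive search gives way to analytic or numerical methods.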
A crown (or cap) is a covering that encases the entire tooth surface, restoring it to its original shape and size. A crown protects and strengthens tooth structure that cannot be restored with fillings or other types of restorations. Although there are several types of crowns, porcelain (tooth-colored) crowns are the most popular. They are highly durable and will last many years, but like most dental restorations, they may eventually need to be replaced. Porcelain crowns are made to match the shape, size, and color of your teeth, giving you a long-lasting, beautiful smile.

Porcelain and gold have long been the traditional materials used for composing crowns. Porcelain crowns can give a more natural look but can also be prone to chipping. Gold crowns have high lab costs, have an unsightly appearance and can produce negative allergic reactions. Over the past decade, more dentists and dental laboratories have been using crowns made of zirconia (zirconium dioxide). Zirconia is a strong crystalline ceramic that has numerous favorable qualities. It is known for being long-lasting and nearly indestructible. Besides its strength and durability, zirconia is also compatible with the human body, making its use popular in the medical field, not just in dentistry. Some of the major benefits of zirconia crowns include increased biocompatibility (fewer chances of infections, complications, discomfort, and allergic reactions), greater durability than porcelain, a more conservative preparation (because of the material's strength, less original tooth structure needs to be removed), no metal content, and a pleasing appearance. The benefits of each type of material will be explained, and your dentist will give you a recommendation based on your individual circumstances.

What does getting a crown involve?
A crown procedure usually requires two appointments.
Your first appointment will include taking several highly accurate molds (or impressions) that will be used to create your custom crown. A mold will also be used to create a temporary crown that will stay on your tooth for approximately two weeks until your new crown is fabricated by a dental laboratory. While the tooth is numb, the dentist will prepare the tooth by removing any decay and shaping the surface to properly fit the crown. Once these details are accomplished, your temporary crown will be placed with temporary cement and your bite will be checked to ensure you are biting properly. At your second appointment, your temporary crown will be removed, the tooth will be cleaned, and your new crown will be carefully placed to ensure the spacing and bite are accurate. You will be given care instructions and encouraged to have regular dental visits to check your new crown.
Mice injected with methamphetamine and exposed to Cryptococcus fungi showed more extensive and hard-to-eradicate infections. Also this week: an organic pesticide fights parasitic worms.

Meth May Worsen Fungal Infections
Using methamphetamine may increase susceptibility to infections with a variety of pathogens by damaging the blood-brain barrier, a study in mice suggested. Prior studies have shown that injected methamphetamine accumulates at the highest concentrations in the lungs, and Luis Martinez, PhD, of Long Island University Post, and colleagues explored whether the drug would affect the severity of pulmonary infection with Cryptococcus neoformans, the leading cause of fungal meningitis in patients with AIDS. The lungs of mice injected with methamphetamine showed greater fungal colonization and biofilm formation and higher levels of the major capsular polysaccharide of the fungus, which is believed to be a key player in pathogenesis. Methamphetamine was also associated with a more rapid progression of the infection into the central nervous system, and the injected mice had higher levels of the fungus in their brains. "Methamphetamine-induced alterations to the molecules responsible for maintaining the integrity of the blood-brain barrier provide an explanation for the susceptibility of a methamphetamine abuser to brain infection by HIV and other pathogens," the researchers wrote in their paper in mBio. -- Todd Neale

Pesticide as Parasite Tx?
A bacterial protein used as a pesticide in organic farming could treat parasitic worms in people cheaply, an animal model study suggested. The protein Cry5B is produced by Bacillus thuringiensis, a bacterium applied to crops to kill soil-borne hookworms and other intestinal parasites and considered safe for consumption; it has already been engineered into some corn and rice crops for pest resistance. In hamsters with hookworms, a single 10 mg/kg dose killed off 93% of the infestation, which is better than clinically used anthelmintic medications, Raffi Aroian, PhD, of the University of California San Diego, and colleagues reported in Applied and Environmental Microbiology. "The challenge is that any cure must be very cheap, it must have the ability to be mass produced in tremendous quantities, safe, and able to withstand rough conditions, including lack of refrigeration, extreme heat, and remote locations," Aroian noted in a statement. His group suggested that the live bacteria could be a good solution for the estimated 1 billion people infected with worms around the world, largely in poor, tropical countries. -- Crystal Phend

Mishap Reveals Plaque Stabilizer
Göran K. Hansson, MD, PhD, of Karolinska University Hospital in Sweden, and colleagues were investigating a new mouse model of atherosclerosis when something didn't quite add up -- the coronary plaques were rich with collagen, a sign of stability, they noted in Science Translational Medicine. Upon further investigation, they treated a mouse model with antibodies that neutralize interleukin (IL)-17A, and the plaques lost their stability, leaving them vulnerable to rupture, which can block the heart's blood supply and cause a massive heart attack. In addition, IL-17A stimulated collagen production by human vascular smooth muscle cells. These results "could lead to new plaque-stabilizing therapies, and should prompt an evaluation of cardiovascular events in patients treated with IL-17 receptor blockade," the researchers concluded. -- Chris Kaiser

Prolactin Protects the Inflamed Joint
Among the multiple actions of the hormone prolactin is the prevention of cytokine-induced chondrocyte apoptosis in the joint, suggesting a novel approach to inhibiting the destructive articular processes in rheumatoid arthritis, researchers reported in the Journal of Clinical Investigation. In a series of rodent experiments, injection of inflammatory cytokines such as tumor necrosis factor-alpha and interleukin 1-beta into the knee joint led to pronounced chondrocyte destruction. However, when prolactin was injected first, chondrocyte apoptosis was delayed and inflammation and swelling were markedly reduced. Even when the prolactin was administered 2 weeks after the cytokine injection, when inflammation was well established, swelling was attenuated and lower levels of inflammatory markers were found in the animals' joints. That hyperprolactinemia might be therapeutic in rheumatoid arthritis or other autoimmune diseases was a "novel and unexpected" finding, according to the researchers. In fact, most studies have focused on the likely pathogenic role for prolactin in these disorders, because of the preponderance of female patients and changes in disease during pregnancy and lactation.

Two-for-One Blockade Targets B-Cell Lymphoma
The regulatory transcription factor BCL6 plays a central role in the growth of diffuse large B cell lymphomas, but because it also has a host of important functions in the immune system, it has been regarded as a difficult target for drug therapy. Now researchers led by Ari Melnick, MD, of Weill Cornell Medical College in New York City, think they may have found a way past that obstacle. It turns out that BCL6 has two functions that promote lymphoma growth, both of which are mediated through a single binding site, the protein's N-terminal BTB domain. Blocking that site should be a potential therapy for diffuse large B cell lymphomas, but should have limited effects on other BCL6 functions, they suggest in Cell Reports. Melnick and colleagues have been working with a BCL6 inhibitor dubbed RI-BP, which blocks both of the lymphoma-promoting functions of BCL6. It completely reversed established lymphomas in all mice treated with the substance with no apparent side effects, they found, and there was no evidence of residual disease or tumor regrowth in 60% of the animals after the drug was stopped. "This is wonderfully serendipitous -- our drug just happens to be able to overcome both of the biological mechanisms that are key to survival of aggressive lymphoma," Melnick said in a statement. The investigators are now working to move RI-BP into human trials. -- Michael Smith
International Day Of Sport For Development And Peace: On 6th April, the International Day of Sport for Development and Peace (IDSDP) is observed annually. This day is celebrated to highlight the importance of sports in society.
- This day recognizes the positive impact of sports on the harmony and peace of various communities across the globe.
- Sports help in promoting social ties, peace, and sustainable development across the planet.
- In 2013, the United Nations General Assembly (UNGA) declared that the 6th of April is to be celebrated as the International Day of Sport for Development and Peace. April 6th was selected because on this day in 1896 the first-ever modern Olympics took place in Athens. Since 2014, this day has been observed annually across the planet.
- In 2015, sport was included in the Sustainable Development Goals of the United Nations after it was considered to play a crucial role in sustainable development.
- Hence, this day acts as a platform for nations to turn their focus to the development of this sector.
- The nations are encouraged to invest in the development of sports and sporting infrastructure, awareness among the masses, and quality education.
The theme for this year
- ‘Securing a Sustainable and Peaceful Future for All: The Contribution of Sport’ is the theme for this year’s International Day of Sport for Development and Peace.
- The theme highlights the importance of sport in creating a better future for humans.
- The focus of this year’s theme is on lowering greenhouse gas emissions and addressing climate change.
Being home to the Nobel Prize is more than a symbolic nod to great achievements in science and the humanities; Swedish ingenuity continues to drive progress in a range of fields, in particular information and communications technologies (ICT). The meteoric rise of the music-streaming service Spotify is big news in the growing list of countries where the Swedish-founded company has launched since it kicked off in 2008. Spotify's founders Daniel Ek and Martin Lorentzon have shot to fame in the vanguard of European entrepreneurs of the Internet age. These two innovators are joined by another Swede who is not only changing the tools we use to communicate, but also the language we speak while using them. Keep this in mind, next time someone says 'Skype me!' For the 10 people on the planet who don't know what this means, Skype is a system using voice-over-Internet protocol (VoIP) that allows people to make low-cost telephone calls via the web. The company was co-founded by Swedish-born Niklas Zennström and sold in 2011 to Microsoft, by which time it had around 700 million users. Going back in time, we see communications technology could indeed be in the Swedish blood. Lars Magnus Ericsson (1846-1926) started the company bearing his name around a century ago. Today, it is one of the largest telecom companies in the world. Meanwhile, earlier in the 19th century, Alfred Nobel put his great wealth towards creating the Nobel Prize reportedly to atone for the harm to the world that his most famous invention - dynamite - had caused. Today's great scientists and thinkers are recognised with great ceremony in Sweden as the year's Nobel Laureates. Further again in history, Carl Linnaeus' contribution to botany, zoology and even modern-day taxonomy is still being felt today. His binomial nomenclature (two-part names) for animal and plant species has helped create order in the natural world. Many also credit him as the father of ecology. 
And Anders Celsius was the astronomer who came up with the 100-point thermometer scale in the early 1700s, a system used across the world today. This brief history of famous scientific Scandinavians is just a prelude to the Swedish pioneers of today. In academia, research labs and industry, Swedish researchers are pre-eminent in fields ranging from conservation sciences to cytogenetics; software development to Internet safety. To this day, Sweden has one of the highest levels of public investment in research compared to the size of its population. According to ERAWatch, Swedish governments have tended to maintain public outlays for research at almost 1% of GDP. Add in private investment, and Sweden is one of only two EU countries that have managed to surpass the target of 3% of GDP invested in R&D annually. Show me the IT! Swedish pragmatism and leadership is on show in the EU-funded SHOWE-IT (1) pilot study aimed at reducing energy and water consumption in social housing in three locations: Rochdale (UK), St Etienne (FR) and Botkyrka (SE). Each of the initial 211 households chosen for the trial has been equipped with easy-to-use 'smart' meters and other ICT-based tools which will help them reach a target of 20% savings in energy and water consumption - a threshold for the SHOWE-IT approach to be considered commercially viable. Meanwhile, the BECA (2) project, which involves Swedish housing specialists ÖrebroBostäder AB, is taking the 'big picture' approach to domestic energy and water conservation across Europe. Social housing organisations in seven European countries (Bulgaria, Czech Republic, Germany, Italy, Serbia, Spain and Sweden) and their partners are cooperating in the project to provide ICT-based energy management and energy awareness services directly to around 5,000 social housing tenants and service operators.
Following a year of investigation and prototypes, the three-year project is now entering an important operational phase and will conclude at the end of 2013. Sweden is taking a leadership role in global efforts to make the Internet a safer place for young people. For example, the Safer Internet Centre Sweden and related Awareness Node scheme, as well as the Internet Safety Helpline, are keeping parent and student groups, governments, industry, associations and educators informed of the latest trends and dangers young people face using online technologies. Meanwhile, the Robert (3) project is studying the tactics that Internet predators use to groom young people and using the findings to equip children, especially the more vulnerable ones, for their forays online. And the FIVES (4) project is developing novel forensic techniques and tools tailored to help police investigate the vast amounts of evidence (videos and images) collected in child sex abuse cases. '[Looking] for illegal images and videos or other investigative leads in the large amounts of data found on seized storage devices,' the project team explains. 'An average investigation could have several terabytes of data stored in different media and formats.' The FIVES project, led by Karlstad University with support from NetClean Technologies Sweden AB, is using perceptual optimisation techniques, object matching and image similarity techniques, among other methods, to allow details of crime scenes to be linked between different image sets or videos and lift the burden on investigators. Healthy respect for technology Swedish partners are also active in the field of eHealth, which supports wider eGovernment initiatives. Take, for example, the EU-funded Sustains (5) project which is trialling 'Electronic health record' (EHR) technologies in 11 pilots across nine European countries.
The project is looking to empower patients and improve the overall quality of care for Europeans while making health care more efficient and cost-effective. Meanwhile, the EU-supported epSOS (6) pilot project is making it easier for people to receive medical assistance anywhere in the EU by removing linguistic, administrative and technical obstacles. According to the project's coordinator Fredrik Linden, of the Swedish Association of Local Authorities and Regions (SALAR), some 30,000 health professionals will use the new services developed (ePrescriptions and Patient Summaries) within the project. Addressing the challenge of an ageing European population, Swedish researchers are also active in the field of palliative care, seeking answers to the question: What can caregivers do during the final days of their patients' lives, apart from administering drugs? According to Dr Olav Lindqvist of Sweden's Karolinska Institutet, 'Palliative care is all about satisfying fundamental human needs, but … it entails so much more than one might at first assume. If we are to further develop palliative care, we must learn more about this type of daily care-giving and tease out its nuances.' The research team, supported under the EU-funded OPCARE9 (7) project, analysed 16 palliative outpatient and inpatient clinics in nine countries. Nursing staff, doctors and volunteers from each clinic were all asked to record non-pharmacological activities that were carried out during the final days of a patient's life, for three to four weeks. The results were recently reported in the journal 'PloS Medicine' and promoted on CORDIS News. A mercurial bug The last example of Swedish ICT prowess, if further evidence is needed, comes from the recently concluded EU-funded project 'Property-based testing' (Protest), which has developed cutting-edge software engineering approaches to improve the reliability of software systems.
According to reports, the Protest team, which includes Swedish partners Ericsson, Quviq AB and Chalmers University of Technology, was able to find bugs in systems that had been used for years and, while they sometimes demonstrated strange behaviour, no previous tests could find the cause. According to the EU Commission official in charge of the project, Protest has delivered an outstanding set of results: 'One of the best projects I have ever had.' And in true Swedish spirit the research will find its way into technological innovations which stand to improve industry and Europe's bottom line. 'Their tools will be used in the telecom industry (Ericsson) and also the car industry (Volvo), where they can test if software systems are functioning according to the Autosar standard,' according to the Commission. Continued investment in skills and research capacities should help ensure that Sweden maintains its vital contribution to the European Research Area, with many more names to join the list of illustrious Swedish scientists. And perhaps someday soon, a Nobel laureate who will not have so far to travel to attend the Stockholm ceremony honouring them! The projects featured in this article have been supported by the Competitiveness and Innovation Programme's (CIP) ICT-Policy Support scheme or the Seventh Framework Programme (FP7) for research.
(1) 'Real-life trial in social housing, of water and energy efficiency ICT services' (2) 'Balanced European conservation approach' (3) 'Risktaking online behaviour - Empowerment through research and training' (4) 'Forensic image and video examination support' (5) 'Support users to access information and services' (6) 'Smart open services - Open eHealth initiative for a European Large Scale Pilot of patient summary and electronic prescription' (7) 'A European collaboration to optimise research for the care of cancer patients in the last days of life' - FP7 on CORDIS - CIP on CORDIS - SHOWE-IT on Europa - BECA on Europa - Safer Internet SE AC-HP on Europa - Safer Internet SE AN-HELP on Europa - Robert on Europa - FIVES on Europa - Sustains on Europa - epSOS on Europa - OPCARE9 on CORDIS - Protest on CORDIS - EU-funded scientists investigate intricacies of end-of-life care - Going from e to we-government - Date: 2012-03-29 - Offer ID: 8320
Good infant and maternal health can have a significant positive impact on the future health and wellbeing of an individual. Therefore, infant health is an important indicator of the level of health and wellbeing existing within a society. This article focuses on factors affecting the health of the more than a quarter of a million babies born in Australia each year, such as their gestation, birthweight, breastfeeding status and immunisation. Infant mortality and illness rates are also examined, as are some maternal factors associated with the health outcomes of infants. Australian babies today have better health prospects for their first year of life than any previous generation. Over the past century, improved sanitation and hygiene, better ante and post-natal care, greater parental education, the introduction of universal immunisation programs and improved medical technology have all contributed to both dramatically reducing the infant mortality rate and preventing the development of long term health problems in infants. However, despite great improvements to infant health over recent decades, there remain a range of interventions and behaviours that can affect health outcomes for babies. It is acknowledged that the biological, social, family, community and economic conditions of children are important predictors of their future health, educational, behavioural, criminal and psycho-social outcomes. (Endnote 1) The Australian Government has recognised the importance of early childhood health and wellbeing in ensuring improved outcomes for Australian children in the development of a National Agenda for Early Childhood. (Endnote 2) This article examines the general characteristics of Australian babies aged under one year, with a particular focus on factors affecting, and improvements to, infant health. Data sources and definitions used in this article.
BABIES: SELECTED CHARACTERISTICS Over the last two decades, the number of babies born each year has averaged around a quarter of a million. In 2005 there were 259,800 births, compared with 247,300 in 1985. The age of the mothers of these babies has been steadily increasing over the past two decades, from a median age of 27.3 years in 1985 to 30.7 in 2005 (for more information on recent fertility trends, refer to Australian Social Trends 2007, Recent increases in Australia's fertility.) The ratio of male to female births has remained stable over this period, with 105.6 male births recorded for every 100 female births in 2005, compared to 105.2 for every 100 births in 1985. The length of gestation is considered to be a key indicator of infant health, with pre-term birth being associated with poorer health outcomes in babies. Over the thirteen years to 2004 a decrease in the number of post-term births (from 5% in 1991 to 1% in 2004) and a marginal increase in the percentage of pre-term births (from 7% in 1991 to 8% in 2004) have contributed to a shorter average length of gestation. In 2004 the average gestation period was 38.8 weeks, a decrease from 39.2 weeks in 1991. The percentage of babies born at term increased, from 88% in 1991 to 91% in 2004. GESTATION OF BABY Most babies born in Australia are born by spontaneous vaginal birth. In 2004, 59% of women gave birth in this way, a fall from 68% in 1991. Much of this decline can be explained by the increasing use of caesarean section for delivery, with 29% of women giving birth by caesarean section in 2004, a substantial increase from 18% in 1991. Factors associated with increased caesarean rates are advancing maternal age, multiple pregnancy, low birthweight, breech presentation and private accommodation status in hospital. (Endnote 3) Around one in nine mothers (11%) had an assisted vaginal delivery, with forceps or vacuum extraction being used to assist the birth, a decrease from 13% in 1991. 
The birthweight of a child is widely accepted as a key indicator of infant health and can be affected by a number of factors, including the age, size, health and nutritional status of the mother, pre-term birth, and tobacco smoking during pregnancy. (Endnote 4, Endnote 5) In 2004 the average birthweight for babies born in Australia was 3,370 grams, similar to the average of 3,350 grams recorded in 1991. Low birthweight is generally associated with poorer health outcomes, including increased risk of illness and death, longer periods of hospitalisation after birth, and increased risk of developing significant disabilities. (Endnote 5) A baby is defined as having a low birthweight if they are born weighing less than 2,500 grams. (Endnote 5) Low birthweight occurred in 6% of liveborn babies born in both 1991 and 2004. An increasing number of babies today are being born with the aid of assisted reproduction technology (ART), which uses medical technology such as in-vitro fertilisation or other fertility treatments to assist in the conception of a child. In 2004, an estimated 2.5% of all births in Australia were the result of ART treatment. Between 1989 and 2004, the number of live births occurring in Australia and New Zealand as a result of ART treatment increased by 74%. (Endnote 7, Endnote 8) Mothers in Australia and New Zealand who conceive in this way tend to be older than mothers in general, with an average age at delivery of 34.5 years in 2004, compared with an average age of 29.7 years for all Australian mothers in 2004. (Endnote 7) Pregnancies commenced using ART are also substantially more likely to result in a multiple birth, with 16% of all deliveries resulting in a multiple birth. (Endnote 7) The percentage of confinements that result in multiple births has increased over the past 20 years, from 1.1% in 1985 to 1.7% in 2005. The increased use of ART is a major factor in the higher rate of multiple births observed during this period. 
Babies born as the result of a multiple birth are more likely to have a low birthweight and short gestation, and experience an increased risk of illness, mortality and longer periods of hospitalisation. (Endnote 9, Endnote 10) Twins born in 2004 weighed on average one kilogram less than their singleton counterparts, with an average weight of 2,410 grams, compared with 3,410 grams for singleton babies. Low birthweight occurred in half (50%) of all twin births and nearly all (95%) triplet and higher order multiple births in 2004, compared with just 5% of singleton births. INFANT MORTALITY AND ILLNESS Infant and neonatal mortality Infant mortality refers to the deaths of children before their first birthday and is a key indicator of infant health, in addition to providing insight into the broader social conditions of the population. Over the past twenty years, the infant mortality rate (the number of infant deaths per 1,000 live births) has halved, from 9.9 in 1985 to 5.0 in 2005. The neonatal mortality rate (the death of a child during their first 28 days of life, per 1,000 live births) has also halved during this period, from 6.1 in 1985 to 3.1 in 2005. Factors that have contributed to these declines include improved medical care and technology, such as developments in neonatal intensive care, and a major reduction in the number of deaths from Sudden Infant Death Syndrome (SIDS). INFANT AND NEONATAL MORTALITY RATES(a)(b) Between 1985 and 2005, deaths from SIDS declined by 83%, from 523 deaths in 1985 to 87 in 2005. The decline in SIDS deaths in Australia during this period is strongly associated with a public health campaign launched by SIDS and Kids (formerly the National SIDS Council of Australia). (Endnote 11) The campaign raised awareness of the risk factors which increased the likelihood of sudden infant death and promoted the importance of safer practices (such as placing the baby to sleep on their back) in reducing the risk of SIDS. 
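The mortality rates quoted here are expressed as deaths per 1,000 live births. As a minimal illustration of that calculation (the helper function is ours, and the 2005 infant death count is back-calculated from the figures quoted in this article rather than taken from the ABS tables):

```python
def rate_per_1000(events: int, live_births: int) -> float:
    """Return events per 1,000 live births, rounded to one decimal place."""
    return round(events / live_births * 1000, 1)

# Figures quoted in this article: 259,800 live births in 2005 and an
# infant mortality rate of 5.0, which implies roughly 1,299 infant deaths.
live_births_2005 = 259_800
infant_deaths_2005 = 1_299  # back-calculated, illustrative only

print(rate_per_1000(infant_deaths_2005, live_births_2005))  # → 5.0
```

The same formula applied to the 523 SIDS deaths against the 247,300 births of 1985 gives the SIDS component of that year's infant mortality.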
The actual birth itself can be a mortality risk for babies, with fatalities caused by complications of pregnancy, labour and delivery and maternal factors being a major cause of infant death, accounting for 27% of deaths. Respiratory and cardiovascular disorders are also a major cause of infant death, causing 8% of deaths. In addition, conditions related to low birthweight and short gestation, congenital and genetic conditions, communicable diseases, accidents and injury, infections and SIDS are significant causes of death and ill health in infants. INFANT MORTALITY: MAIN CAUSES — 2005 An analysis of data from the Australian Institute of Health and Welfare's National Hospital Morbidity Database shows that disorders relating to the length of gestation and fetal growth were the most common cause of hospital separations for infants in 2004–05. This cause accounted for 15% of hospital separations for infants in 2004–05, an increase from 11% in 1994–95. Respiratory conditions, most commonly acute bronchiolitis, were the next most common cause of hospitalisation, responsible for 13% of separations in 2004–05, down from 14% in 1994–95. Infectious and parasitic diseases accounted for 6% of separations, unchanged from 1994–95. Hospital separations relating to injuries and poisoning also did not change during this period, accounting for 2% of separations in both 2004–05 and 1994–95. Breastfeeding has been shown to provide significant health benefits for both mother and child. For babies, breastfeeding increases resistance to infection and disease, reduces the likelihood of allergic diseases such as asthma and eczema, and is also associated with higher IQ scores. (Endnote 13, Endnote 14) Mothers who breastfeed tend to experience a quicker recovery from childbirth and reduced risk of breast cancer before menopause. (Endnote 13) For these reasons both the Australian Government and the World Health Organisation recommend that babies are fed only breastmilk until 6 months of age. 
(Endnote 13) At the beginning of the previous century, before the widespread use of infant formula, breastfeeding or the use of a wet nurse was the most common way to feed an infant. There is evidence that most Australian newborns were breastfed before the 1940s. However, by the 1970s only 40–50% of babies were breastfed. (Endnote 14) Since then the prevalence of breastfeeding has increased along with growing public awareness of the importance of breastfeeding. In 2004–05, 88% of children aged under 3 years had ever been breastfed, receiving breastmilk either exclusively, or as part of their diet in combination with breastmilk substitutes and/or solid food. BREASTFEEDING RATES(a) BY EDUCATION LEVEL OF MOTHER — 2001 Immunisation programs for children are recognised as a highly effective public health intervention, greatly reducing the incidence of epidemics of infectious diseases. As a result of widespread vaccination programs, many once common childhood illnesses such as polio and diphtheria are no longer major causes of death and disability for Australian children. Babies aged under 12 months currently experience high rates of vaccination, although overall vaccination coverage has declined marginally in recent years. In 2006, 91% of children in this age group were fully immunised, compared with 92% in 2002. An analysis of vaccines administered under the National Immunisation Program Schedule reveals that 92% of children at 12 months of age in 2006 had received the DTP vaccine, which provides immunisation against diphtheria, tetanus and pertussis (whooping cough), compared with 93% in 2002. For individual vaccines, 92% were immunised against polio (93% in 2002), 94% against Haemophilus influenzae type B (HIB), slightly less than in 2002 (95%), and 94% were immunised against Hepatitis B (95% in 2002).
VACCINATION COVERAGE FOR AUSTRALIAN BABIES AT 12 MONTHS OF AGE As discussed above, the health of the mother can affect infant health both during gestation and after birth. A mother who is healthy, receives good nutrition and does not smoke or drink at risky levels is more likely to give birth to a healthy child. Smoking is one major risk factor that can adversely affect infant health, increasing the likelihood of low birthweight, pre-term birth, fetal and neonatal death and SIDS. (Endnote 6) Women are less likely to smoke during pregnancy than women of the same age in the general population, with 17% of women giving birth in 2003 (excluding Victoria, Tasmania and Queensland) smoking during their pregnancy, compared with 25% of women in the childbearing age group of 15–44 years in 2004. Younger women are more likely to smoke during pregnancy, with 42% of mothers aged under 20 reporting smoking during pregnancy, compared with 11% of mothers aged over 40 years. Drug taking and excessive use of alcohol are also associated with poorer infant outcomes. Illicit drug taking during pregnancy is associated with increased risk of low birthweight, prematurity, growth retardation and birth defects, while heavy drinking during pregnancy is associated with fetal alcohol syndrome. (Endnote 15, Endnote 16) In 2004, 6% of women who were pregnant and/or breastfeeding in the past 12 months reported using an illicit drug whilst pregnant and/or breastfeeding, and 47% reported having used alcohol whilst pregnant and/or breastfeeding. The proportion of women who drink at risky levels during pregnancy is not known. The age of mother at birth can also affect health outcomes. Very young and older mothers are more likely to give birth to babies with shorter gestation times and lower birthweights than the average. In 2004, 6% of babies were born with a low birthweight.
The percentage of babies born with a low birthweight rose to 9% both for babies born to mothers aged 15–19 years and mothers aged 40 years and over. The risk of low birthweight increased substantially for babies born to mothers aged over 45 years, with 16% of babies in this category being born with a low birthweight (although this is based on a relatively small number of births). PERCENTAGE OF BABIES BORN WITH LOW BIRTHWEIGHT BY AGE OF MOTHER — 2004 2 Commonwealth Task Force on Child Development, Health and Wellbeing, The National Agenda for Early Childhood: A Draft Framework, FaCSIA, viewed 24 November 2006, http://www.facsia.gov.au/internet/facsinternet.nsf/via/early_childhood/$File/naec_aug04.pdf. 7 Wang, YA, Dean, JH, Grayson, N and Sullivan, EA (for Australian Institute of Health and Welfare) 2006, Assisted reproduction technology in Australia and New Zealand 2004, cat. no. PER 39, AIHW, Sydney. 12 World Health Organisation 2005, Facts and Figures from the World Health Report 2005, WHO, viewed 8 January 2007, http://www.who.int/whr/2005/media_centre/facts_en.pdf. 13 National Health and Medical Research Council 2003, Dietary Guidelines for Children and Adolescents in Australia, incorporating the Infant Feeding Guidelines for Health Workers, Commonwealth of Australia, Canberra. 15 Better Health Channel 2006, Pregnancy and drugs, viewed 12 January 2007, http://www.betterhealth.vic.gov.au/bhcv2/bhcarticles.nsf/pages/Pregnancy_and_drugs. 16 Department of Health and Ageing 2006, Maternal and Infant Health, DoHA, viewed 12 January 2007, Data used in this article are drawn from multiple sources, with the main data sources being the ABS Births, Deaths and Health collections, the Australian Childhood Immunisation Register, and the Australian Institute of Health and Welfare's (AIHW) National Perinatal Data Collection. A confinement is a pregnancy which results in at least one live birth. 
A multiple birth is a confinement which results in two or more babies, at least one of which is live-born. Gestation refers to the duration of pregnancy in completed weeks: Pre-term refers to babies born at less than 37 weeks gestation. At term refers to babies born between 37 and 41 weeks gestation. Post-term refers to babies born at or after 42 weeks gestation. A caesarean section is an operative birth through an abdominal incision. A separation is an episode of care for a patient admitted to hospital. Health outcomes for Indigenous babies remain significantly poorer than those experienced by the general Australian population. Adverse health outcomes are far more prevalent, with infant mortality nearly triple the non-Indigenous rate. Indigenous babies are also more likely to have a lower birthweight, be born prematurely, and are less likely to be fully immunised, or breastfed past 6 months of age. Mothers of Indigenous babies have a median age that is 6 years younger than mothers of non-Indigenous babies, and are more than twice as likely to smoke during pregnancy. (Endnote 6) BABIES SELECTED INDICATORS Infant mortality: an international perspective Considerable variation exists in infant mortality rates internationally. In the developing world, where infant mortality rates are high, infectious diseases, diarrhoea and malnutrition are still common causes of infant death. In developed countries, where infant mortality rates are low, illnesses relating to preterm birth and congenital causes are more likely to be major causes of infant death. Significant differences also exist in neonatal mortality rates: the chances of a woman (during her childbearing years) losing a baby during its first 28 days of life is 1 in 5 in Africa, compared with 1 in 125 in more developed countries. (Endnote 12) INFANT MORTALITY RATES, SELECTED COUNTRIES — 2004
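The gestation and birthweight definitions above reduce to simple numeric thresholds. A minimal sketch of those classification rules (the function names are ours, for illustration only):

```python
def classify_gestation(completed_weeks: int) -> str:
    """Classify gestation length per the definitions used in this article."""
    if completed_weeks < 37:
        return "pre-term"   # less than 37 weeks gestation
    if completed_weeks <= 41:
        return "at term"    # 37 to 41 weeks gestation
    return "post-term"      # 42 weeks gestation or more

def is_low_birthweight(grams: int) -> bool:
    """A baby has a low birthweight if born weighing less than 2,500 grams."""
    return grams < 2500

print(classify_gestation(36))    # → pre-term
print(classify_gestation(39))    # → at term
print(is_low_birthweight(2410))  # → True (the average twin weight in 2004)
```

Applied to the figures in the article, the 3,410-gram average singleton of 2004 falls well clear of the low-birthweight threshold, while the average twin does not.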
Using the world's most powerful radio antenna, researchers have found stars unexpectedly blasting out radio waves, possibly indicating the presence of hidden planets. The University of Queensland's Dr Benjamin Pope and colleagues at the Dutch national observatory ASTRON have been searching for planets using the world's most powerful radio telescope, the Low Frequency Array (LOFAR), located in the Netherlands. "We've discovered signals from 19 distant red dwarf stars, four of which are best explained by the existence of planets orbiting them," Dr Pope said. "We've long known that the planets of our own solar system produce powerful radio waves as their magnetic fields interact with the solar wind, but radio signals from planets outside our solar system had yet to be picked up. "This discovery is an important step for radio astronomy and could potentially lead to the discovery of planets throughout the galaxy." Previously, astronomers were only able to detect the very nearest stars in persistent radio emission, and everything else in the radio sky was interstellar gas, or exotica such as black holes. Now, radio astronomers are able to see plain old stars when they make their observations, and with that information, they can search for any planets surrounding those stars. The team concentrated on red dwarf stars, which are much smaller than the Sun and known to have intense magnetic activity that drives stellar flares and radio emission. But some old, magnetically inactive stars also appeared, challenging conventional understanding. Dr Joseph Callingham of Leiden University and ASTRON, lead author of the discovery, said the team is confident these signals are coming from a magnetic connection between the stars and unseen orbiting planets, similar to the interaction between Jupiter and its moon Io.
"Our own Earth has aurorae, commonly recognised here as the northern and southern lights, that also produce powerful radio waves; this comes from the interaction of the planet's magnetic field with the solar wind," he said. "But in the case of aurorae from Jupiter, they're much stronger, as its volcanic moon Io is blasting material out into space, filling Jupiter's environment with particles that drive unusually powerful aurorae. "Our model for this radio emission from our stars is a scaled-up version of Jupiter and Io, with a planet enveloped in the magnetic field of a star, feeding material into vast currents that similarly power bright aurorae. "It's a spectacle that has attracted our attention from lightyears away." The research team now wants to confirm that the proposed planets exist. "We can't be 100 per cent sure that the four stars we think have planets are indeed planet hosts, but we can say that a planet-star interaction is the best explanation for what we're seeing," Dr Pope said. "Follow-up observations have ruled out planets more massive than Earth, but there's nothing to say that a smaller planet wouldn't do this." The discoveries with LOFAR are just the beginning, but the telescope only has the capacity to monitor stars that are relatively nearby, up to 165 lightyears away. With Australia and South Africa's Square Kilometre Array radio telescope finally under construction, and hopefully switching on in 2029, the team predicts it will be able to see many relevant stars out to much greater distances. This work shows that radio astronomy is on the cusp of transforming our understanding of planets outside our Solar System. "The population of M dwarfs observed at low radio frequencies" by J. R. Callingham, H. K. Vedantham, T. W. Shimwell, B. J. S. Pope, I. E. Davis, P. N. Best, M. J. Hardcastle, H. J. A. Röttgering, J. Sabater, C. Tasse, R. J. van Weeren, W. L.
Williams, P. Zarka, F. de Gasperin and A. Drabent, 11 October 2021, Nature Astronomy.DOI: 10.1038/ s41550-021-01483-0. ” The TESS View of LOFAR Radio-emitting Stars” by Benjamin J. S. Pope, Joseph R. Callingham, Adina D. Feinstein, Maximilian N. Günther, Harish K. Vedantham, Megan Ansdell and Timothy W. Shimwell, 11 October 2021, The Astrophysical Journal Letters.DOI: 10.3847/ 2041-8213/ ac230c.
Allergies and Intolerances: What you need to know to safeguard your health

Do you know the difference between a true allergy and an intolerance? It's worth learning the difference, because knowing what you're dealing with can help you to make better decisions for your long-term health and wellbeing.

Let's start by talking about allergies. A true allergy causes an immune system response. An allergy can be very dangerous, severe, and even life threatening. The immune response caused by an allergy will affect multiple organs and systems in the body, and you may have a progressively worse response with each exposure.

An intolerance can cause similar symptoms, which is why it can be easily confused with an allergy. But an intolerance is less likely to be life threatening. It will cause discomfort, but a person may be able to ingest smaller quantities of the food or substance without having a severe reaction. Instead, you'll have less acute issues that might be easier to ignore, but still cause problems in the body.

What are the causes of intolerances?

- Lack of the digestive enzymes needed to break down certain foods. A common intolerance you might be familiar with is lactose intolerance.
- Stress. Really! Your body may react poorly under emotionally difficult circumstances, leading to intolerances and symptoms that seem vague. This can include symptoms such as abdominal bloating.
- Celiac disease. This is not a true allergy because there isn't a risk of anaphylactic shock, but it can cause similar, serious, multiple-organ issues. Symptoms may include digestive trouble, but also headaches and joint pain.
- Sensitivity to chemical food additives, such as dyes and preservatives.
- Irritable bowel syndrome, which can mimic the symptoms of an allergy by causing severe gastrointestinal distress.

In general, if you have concerns, we recommend that you get medical counseling and be tested for allergies. We also have additional recommendations to help you determine the best course of action.
Don't forget, allergies and intolerances can also apply to problems with environmental factors such as mold, dust, and other issues. You'll want to check for each to know for sure.

Here at Proactive Wellness, our medical practitioners take a holistic, whole-health approach. For that reason, we always recommend that you start with the basics. Here are some early things you can do to tell what's going on.

- Know your body. Unexplained symptoms should never be ignored. If you are feeling unwell, it's worth a trip to visit your trusted provider.
- Pay attention to how you feel after you eat. Keep a food diary if you suspect that you might be having symptoms after eating certain foods.
- Try tracking other environmental factors. Be ready to consider that the issue might be caused by mold or Lyme disease. Have you been around an unfamiliar animal recently? You may have contracted a parasite.
- Consider that your symptoms might be caused by chronic inflammation or other stress factors.
- Finally, get tested for underlying infections. The flare-up symptoms may mimic those of an allergy or intolerance.

Finally, please don't hesitate to reach out! At Proactive Wellness, we will provide a complete screening and counseling at each step to help you determine whether you have an allergy or an intolerance. We'll provide counseling and support to help you make good decisions that support complete health and wellness. If you have any questions, we will run tests and give you full screenings to rule out any issues and prevent additional risk to your long-term health. If you have any questions about your health, please don't hesitate to give us a call! Check out our website for more information as well. We want to partner with you to find solutions that work and help you to live your most active, healthy life – with or without allergies and intolerances.
Most of us statistics (and data science!) educators understand that knowing how to use statistical software is integral to our statistics and data science majors' success, both in their coursework and in their careers. However, in many degree programs, software usage is seen as a means to an end – getting an analysis – rather than an end goal in its own right. How did this come about, why does it matter, and what can we do to change our software-related instruction? These are the questions I discuss below, first by looking at some history of programming in these contexts, then by presenting two current philosophies on how to incorporate programming.

Getting a bachelor's degree in mathematics has long meant learning computer programming. As statistics degree offerings appeared, they adopted this convention, and the emerging field of data science, with its inherent computational needs, has followed suit. Whether Java, C, R (or S-PLUS back in my day!), Python, or SAS – students pursuing a degree in statistics or data science routinely program in at least one of these languages. Unfortunately, this is often enforced by adding a course to an existing degree program. In some cases, this course is merely borrowed from another department and does not meet the students' discipline-specific needs!

Even in relatively new degree programs, our field's approach to programming seems anachronistic. We commonly use software in the classroom to help teach a variety of topics – exploring data graphically, computing classical summary or inferential statistics, or conducting a simulation to study the properties of a resampling technique – and these advances in the inclusion of software are often touted when discussing how we have modernized our curricula. However, if students do not build the programming skills necessary to implement and understand these analyses, then software becomes a black box.

Why Does This Matter?
While there are several reasons to revisit how we teach programming, the one I'm focusing on here is that programming is different from most other skills we teach – if a program is inefficient, doesn't follow good programming practices, or is otherwise sub-optimal, it can still produce correct results! Programming is not just something students should do to get an answer. We have an obligation to go beyond teaching students how to write functional code – we must train high-quality programmers. Statistics and data science careers that make extensive use of programming are exceedingly popular in their own right, and as data sets get larger and programming becomes a ubiquitous skill, there is immense value in students not only being able to write code that solves a problem, but in using best practices when doing so.

Degree programs have typically adopted one of two prevailing philosophies regarding programming instruction: integrated or standalone. Both approaches have advantages and disadvantages that are important for designing an optimal educational experience that prepares our students to write the end-to-end programs they will use in their careers. In this context, I'm defining end-to-end programming as the application of the following three components:

1. Data cleaning and preparation
2. Data summary, analysis, and modeling
3. Reporting/presentation of results

Of course, most programs make use of general computing concepts (e.g., file types, paths, etc.) and not every program needs to employ all three components. However, students trained to write this style of end-to-end program can easily adapt to writing programs that only require one or two of the components.

Integrated Instruction

This approach typically focuses only on data summary, analysis, and modeling – concepts used as a means to an end for discipline-specific course content – e.g.
a regression course teaching SAS modeling tools such as PROC REG but excluding any programming concepts not explicitly needed to complete the course. The most obvious pedagogical benefit to this approach is instructors can present a programming skill after students are familiar with the statistical concept. A second, but related, benefit of the integrated approach is logistical – students do not need to worry about when to take a programming course because programming is learned in concert with the discipline-specific content. However, the drawbacks to solely using integrated instruction are substantial. The overarching issue is that students are less likely to gain an appreciation for, or even an understanding of, the general computing principles necessary to be a practicing statistician. One of the primary examples is that the classroom data sets to which students are exposed have already been sanitized, meaning a loss of opportunities to develop skills with reading, cleaning, and restructuring data. Students are also better able to understand the requirements for developing good data collection methods when exposed to the results of poorly collected and/or maintained data sets. Additionally, more instructional time is required to teach programming along with discipline-specific content. Integrated instruction also requires all instructors to teach at least some computing concepts in addition to the course-specific content. Depending on department size, this can be an unrealistic expectation if all faculty are not well-versed in the same language because, as learners, it is important to expose students to the same language repeatedly. Exposing them to multiple languages is valuable, of course, but if done without proper structure, students cannot build on what they learned in an earlier course and instructors cannot assume prior knowledge. 
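To make the three end-to-end components concrete, here is a minimal sketch. My own courses use SAS, but Python keeps the example short; the toy data and the `score` column name are invented purely for illustration:

```python
import csv
import io
import statistics

# Component 1: data cleaning and preparation.
# Read raw text, skipping blank lines and records that fail validation.
raw = io.StringIO("score\n81\n\nnot-a-number\n90\n75\n")
scores = []
for row in csv.DictReader(raw):
    try:
        scores.append(float(row["score"]))
    except (TypeError, ValueError):
        continue  # drop malformed records instead of crashing

# Component 2: data summary, analysis, and modeling.
n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# Component 3: reporting/presentation of results.
print(f"n = {n}, mean = {mean:.1f}, sd = {sd:.2f}")
# prints: n = 3, mean = 82.0, sd = 7.55
```

Even a toy program like this forces students to confront messy input (Component 1) before any analysis happens – which is exactly the experience that sanitized classroom data sets take away.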
Standalone Instruction

I'm defining standalone instruction to mean classes covering the language-specific concepts and any general programming concepts required to effectively use the language. For example, in a SAS course this would mean not only covering SAS concepts but also including path/directory structure, file types/attributes, image resolution, etc. There are two common "flavors" of the standalone course: applications-focused and whole-language. The applications-focused flavor – where material on data summary/analysis/modeling (Component 2) provides students with the skills necessary to carry out discipline-specific analyses needed in their other courses – is similar to the integrated approach above, except these analysis tools are all in a single course and some time may be devoted to data cleaning/preparation and reporting/presentation of results (Components 1 and 3, respectively). The whole-language approach provides much less in the way of Component 2 skills and instead focuses on Components 1 and 3, teaching the analysis software from the computer science perspective by covering syntax, compilation, and good programming practices while including a few basic Component 2 concepts so students get practice writing end-to-end programs. The applications-focused course suffers from several significant logistical issues: when should students take the course, and what should be included? If taken too early, students are unlikely to understand most of the analysis techniques, but if taken too late they cannot apply any of the programming skills in their discipline-specific courses, severely limiting the programming course's utility. To determine the course's content, instructors need to agree on what skills will be useful throughout the degree program, and deviation from that list in later courses also reduces the course's utility.
Additionally, concentrating the analysis in a single course still deprives students of a deeper understanding of the software's capabilities and operation and is less likely to instill an understanding of good programming practices. The whole-language approach should still include simple analysis techniques, which is both a benefit and a drawback. It removes the logistical barriers because students can take the course earlier in their degree program, but then it derives its usefulness from how programming is emphasized in the remainder of a student's coursework. If future courses never or rarely require students to use the skills obtained in this early-career course, then its benefits are severely blunted. However, when used properly, the whole-language approach provides a solid foundation onto which students can add skills presented in later classes while lowering the instructor's burden in those courses.

What Should We Do?

To best educate our students, we should apply both approaches in a way that minimizes drawbacks and maximizes benefits to make sure we are truly training programmers and not just teaching our students to write a program as a means to an end. Because students need to be prepared to write high-quality end-to-end programs, we need to explain to students early in their career what that process looks like. To meet these goals, I propose the following as a starting point for degree programs looking to modernize their approach to teaching programming skills.

1. Employ an early-career, standalone course using whole-language instruction. Use it to introduce the three components and establish good programming practices.
2. Use integrated instruction in the same language in multiple future courses, each time assessing the students' programming skills.
3. Enforce a common set of good programming practices across all courses.
4. Apply a common rubric for assessing programming skills across all courses.
Of course, most of us are not in a position to rewrite our department's curricula or convince our colleagues to teach their courses differently. However, we can all take steps, such as collaborating with whoever teaches your programming course (or proposing a new course!) and choosing to assess programming in our own classes to help our students develop these crucial skills. By building a strong foundation, vertically integrating a programming language into our curriculum, and enforcing good programming practices we can not only produce high-quality data scientists and statisticians, we can also move beyond just teaching programming and start training programmers for the careers that are waiting for them.

Contributing author Jonathan Duggins is a Teaching Assistant Professor in the Department of Statistics at North Carolina State University.
Small circle defined via mouse input

Syntax

h = scircleg(ncirc)
h = scircleg(ncirc,npts)
h = scircleg(ncirc,linestyle)
h = scircleg(ncirc,PropertyName,PropertyValue,...)
[lat,lon] = scircleg(ncirc,npts,...)
h = scircleg(track,ncirc,...)

Description

h = scircleg(ncirc) brings forward the current map axes and waits for the user to make (2 * ncirc) mouse clicks. The output h is a vector of handles for the ncirc small circles, which are then displayed.

h = scircleg(track,ncirc,...) specifies the logic with which ranges are calculated. If the string track is 'gc' (the default), great circle distance is used. If track is 'rh', rhumb line distance is used.

This function is used to define small circles for display using mouse clicks. For each circle, two clicks are required: one to mark the center of the circle and one to mark any point on the circle itself, thereby defining the radius. A small circle is the locus of all points an equal surface distance from a given center. For true small circles, this distance is always calculated in a great circle sense; however, the scircleg function allows a locus to be calculated using distances in a rhumb line sense as well.

You can modify the circle after creation by shift+clicking it. The circle is then in edit mode, during which you can change the size and position by dragging control points, or by entering values into a control panel. Shift+clicking again exits edit mode.
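The small-circle locus described above can be computed directly from spherical trigonometry. The following Python sketch is not part of the Mapping Toolbox; the function name and the uniform azimuth sampling are my own choices, and it implements only the great circle ('gc') logic, not the rhumb line variant:

```python
import math

def small_circle_points(lat0, lon0, radius, npts=100):
    """Sample a small circle: the locus of points lying a fixed
    great-circle distance `radius` (degrees of arc) from the
    center (lat0, lon0). All angles are in degrees."""
    phi0 = math.radians(lat0)
    lam0 = math.radians(lon0)
    r = math.radians(radius)
    pts = []
    for k in range(npts):
        az = 2.0 * math.pi * k / npts  # azimuth measured at the center
        # Direct great-circle problem on a unit sphere.
        phi = math.asin(math.sin(phi0) * math.cos(r)
                        + math.cos(phi0) * math.sin(r) * math.cos(az))
        lam = lam0 + math.atan2(
            math.sin(az) * math.sin(r) * math.cos(phi0),
            math.cos(r) - math.sin(phi0) * math.sin(phi))
        pts.append((math.degrees(phi), math.degrees(lam)))
    return pts
```

Every sampled point sits the same angular distance from the center, which is the defining property of a small circle; choosing 'rh' in scircleg instead holds rhumb-line distance constant, which this sketch does not cover.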
The pongids are the four great apes: orangutan, gorilla, chimpanzee, and bonobo. Rigorous primate-behavior field research during the last fifty years has clearly demonstrated that these apes are closer to the human species than Thomas Huxley, Ernst Haeckel, or even Charles Darwin had anticipated in the nineteenth century. Today, scientific evidence, ranging from biochemistry and genetics to morphology and psychology, confirms those striking similarities between these wild pongids and the human species in terms of organic evolution. The wild orangutan or “man of the woods” (Pongo pygmaeus) is the only great ape of Asia; like the two lesser apes or hylobates (gibbon and siamang), this elusive pongid now faces extinction. This rare but fascinating red ape of Indonesia is found only on the islands of Borneo and Sumatra. Most orangutans live in the upper tropical rain-forest canopy; adult females and juveniles swing among the vines and creepers of this arboreal world, while huge adult males may leisurely forage on the damp jungle floor below. The wild mountain gorilla of central East Africa (Gorilla gorilla beringei) is the largest of the four great apes. It is an introverted but intelligent and powerful pongid. In remote highlands, scattered groups of gorillas freely roam through the lush, wet forests. A social unit is dominated by the magnificent adult silverback male: Such a mature gorilla may reach a height of six feet and weigh up to 600 pounds. Yet, this gentle giant of the primate world is shy and acts somewhat like a recluse. The gorilla and chimpanzee (as well as the bonobo) are found in Africa. Their striking similarities to the human animal led Darwin to write that the origin and early evolution of humankind had taken place on the so-called “Dark Continent,” a hypothesis that is now supported by both molecular and paleontological evidence. 
The common chimpanzee (Pan troglodytes) is the most curious, intelligent, and extroverted of the four pongids inhabiting the earth today. In the forests and on the woodlands, the African chimpanzees generally exhibit a free-ranging and harmonious existence. Whether in the trees or on the ground, small groups of these apes spend much time searching for food when not playing, grooming, or merely resting in their nests. The bonobo (Pan paniscus) also inhabits Central Africa, but it has remained elusive to anthropologists until the last twenty years. Both in terms of biology and behavior, it remarkably resembles the human species. Unfortunately, this pongid is also an endangered species threatened with extinction. Each of the four great apes has no tail, and is both larger and more intelligent than any monkey. The pongid adult dental formula is 2-1-2-3 (also shared by all of the Old World monkeys and humankind). With arms longer than their legs, these apes brachiate through the trees but usually take a quadruped position when terrestrial. Their thumbs and big toes, both opposable, are favorable adaptations for an arboreal life. However, because of their large size, adult great apes spend considerable time on the ground during the day. Today, these four wild pongids need to be both saved by and protected from the human species, the most dangerous animal of all. To do so, since the middle of the twentieth century, there has been a concerted effort by primatologists to study the great apes in their natural habitats. Such close-range, long-term pongid research supplements those scientific findings as a result of research in laboratories and zoos. In prehistoric times, giant orangutans probably inhabited much of Southeast Asia (the hominoid fossils Ramapithecus and Gigantopithecus are now considered to be ancestral to this red ape). 
Today, the orangutan is the largest of the three Asian apes but it is restricted to the dense tropical jungles on the islands of Borneo and Sumatra. In the nineteenth century, the naturalist and evolutionist Alfred Russel Wallace spent several years investigating the flora and fauna of Indonesia. In his 1869 book The Malay Archipelago, dedicated to Charles Darwin, Wallace related his encounters with the wild orangutan in the swampy dense forests of Borneo. It is now very disconcerting to read how casually this biologist hunted and killed this very rare pongid in order to obtain specimens for scientific study; nevertheless, Wallace was one of the first naturalists to observe the orangutan's general habits in its natural environment. His report includes physical descriptions of this ape as well as information on its diet, nest-building activity, and arboreal locomotion. Apparently, the orangutan's only past enemies were the leopard, the cobra, the python, and the crocodile; yet, it is now an endangered species because of the encroachment of human civilization, with the resultant destruction of its habitat and disruption of its activity. In the twentieth century, zoologist John MacKinnon presented a popular account of his three-year behavior study of the elusive orangutan in both Borneo and Sumatra. In his 1974 book, In Search of the Red Ape, MacKinnon gives a vivid picture of this wild pongid's general environment: the endless and inhospitable green jungles of these two islands in which only deep grunts, smacking lips, and crashing branches betray the presence of the shy orangutan. He describes this ape's typical day of feeding, traveling leisurely, and resting in tree nests. Since 1971, anthropologist Birute Mary F. Galdikas has rigorously undertaken the first intensive, long-range study of the wild orangutan (Pongo pygmaeus) of interior Borneo in its own environment. The late anthropologist Louis S.B.
Leakey had inspired her to study this red ape of Indonesia, a project never before attempted by a female scientist. This “man of the woods” is the rarest and least social of the four great apes. Its natural habitat is now restricted to diminishing forests on the islands of Borneo and Sumatra. At present, there are fewer than 4,000 free-ranging orangutans existing in their own ecological niche. Operating from three research campsites within the lowland jungle of the Tanjung Puting Nature Reserve (Indonesian Borneo), Galdikas has devoted herself to two major objectives: the rehabilitation of orphaned orangutans and the ethological study of this great ape in its own environment. Young, orphaned orangutans usually die from neglect, disease, or malnutrition. At her remote rehabilitation center in Kalimantan, Galdikas prepares these tame animals so that they may eventually adjust to the wild woods. After camp life, the successfully rehabilitated juveniles are returned to their natural habitat to be free and magnificent in the trees. Hopefully, this undertaking will help to increase the number of wild orangutans in their own environment. The wild orangutan is now an endangered species for two major reasons: Hunters needlessly slaughter many adult pongids in order to obtain infants and juveniles for the “black market” sale to zoos, circuses, museums, or individuals (illegally held apes are confiscated), while farmers continue to destroy this pongid’s natural habitat by cutting timber and clearing cropland, thereby depriving the red ape of forest area, food supply, natural protection, and normal social activity (including sexual behavior). Obviously, there is an urgent need to understand and appreciate the wild orangutan if this unique primate is to be protected from human encroachment and saved from imminent extinction. Since 1974, Galdikas has followed an adult male orangutan through the Borneo jungle to observe his behavior in the wild. 
This task has required her venturing through swamps with leeches, mosquitoes, crocodiles, and poisonous snakes. She discovered that this large ape spends much of its time walking on the ground. However, most orangutans are generally arboreal; they move slowly and deliberately through the trees by using their hook-like hands and prehensile feet. The adult orangutan is essentially asocial: There are no problems to be solved in a day-to-day life that demands little cooperation, other than mating and minimal infant care by a mother. Adult males and females are usually solitary or found in loose social organizations. The only temporary social unit is an adult female with her infant or juvenile. Adult social interaction is infrequent. Two males may engage in combat over a female at mating time; however, there is only sporadic male/female companionship for sexual purposes. Females prefer the larger males as partners in consort relationships. Since orangutans are usually arboreal, there is no group structure or social discipline. They build simple nests in the forest trees or on the jungle ground; during heavy rain, they make a “roof” of leaves. Orangutans rarely groom each other. Their diet is primarily frugivorous and includes buds; shoots; seeds; young leaves; wild, unripe and ripe fruits; flowers; soft bark; wood pith; honey; termites; birds’ eggs; and insects. In fact, the orangutan uses a stick to extract honey from a tree. Caged in zoos, the shaggy red ape is lethargic and very prone to both obesity and erotic behavior. It suffers the same diseases humans do, from malaria to the common cold. This pongid may even identify with humans (thereby contributing to problems of sexual reproduction in captivity). At about the age of 12, an adult male orangutan will exhibit these characteristics: a large-domed head, pronounced cheek pads (flares of tissue), a throat sack (gular pouch), and a beard, as well as at times the making of a long, shrill call. 
There is marked sexual dimorphism, with the adult male being about twice as large as the adult female. The male may reach a height of nearly five feet, weigh over 200 pounds, and have a cranial capacity of about 200 cc. The wild orangutan is an independent and gentle creature, living a rather lonely existence and acting viciously only when provoked. Leaving the primeval rain-forest canopy during the day, an adult male spends time on the ground and may even venture out of the tropical jungle. Such an isolated ape may forage for as long as six hours while walking awkwardly for long distances through the rain forest on all fours, and perhaps even nap on the damp, humus-covered floor of the jungle. Although free-ranging, adult females never leave their home area throughout their lives. As there is clearly an urgent need to protect and preserve the wild orangutan, which is sometimes referred to as a "missing link," the serious research of Galdikas (among others) is a major step to understanding and appreciating this endangered great ape of Borneo and Sumatra.

The wild gorilla of central East Africa is the largest of the four great apes; this mountain subspecies was discovered in 1847. Until recently, this primate was thought to be a vicious animal of the jungle; in sharp contrast to this view, however, field studies have revealed that this huge pongid is actually shy, gentle and basically introverted, but curious, intelligent, and very powerful. In prehistoric times, wild gorillas were abundant and ranged freely throughout central Africa. Today this great ape is represented by only three geographically isolated subspecies: the lowland or valley gorilla of central West Africa (Gorilla gorilla gorilla), the highland or mountain gorilla of central East Africa (Gorilla gorilla beringei), and a third subspecies also found in central East Africa (Gorilla gorilla graueri).
The differences among these three subspecies are minor: the mountain gorilla is larger and has thicker hair (an adaptive advantage), shorter arms, a narrower skull, a longer palate, and does not share its forest with wild chimpanzees. Now an endangered species, the vanishing wild gorilla needs to be understood and appreciated in order to ensure both its protection and survival in captivity as well as in its natural environment. It is estimated that only about 240 mountain gorillas now inhabit the lush, wet, tropical jungle areas of central East Africa. Their ecological niche is gravely threatened by hunters, farmers, and poachers; thus the desperate plight of this mountain pongid. George B. Schaller, a zoologist, was the first scientist to seriously study the wild mountain gorillas (Gorilla gorilla beringei) in their own habitat. His pioneering, twenty-month research project (1959-1960) remains a classic contribution to primate ethology. He presented his important findings in two books: The Mountain Gorilla and The Year of the Gorilla. Gorillas are primarily quadrupedal and terrestrial, living in the humid rain forests of Africa. In his extensive study of the mountain subspecies, Schaller concentrated on the ecology and general behavior of this remarkable great ape. His 466 hours of direct observation were made from the viewing advantage and safety of trees. He was able to watch free-living gorillas wander through the dense jungle, forage on succulent herbs and vines (they prefer young, secondary forest growth), groom and play, defend territory, and build nests in the trees or on the ground. The zoologist observed no tool use, no meat eating, and no drinking of water; furthermore, gorillas do not like to get wet. Both vocalizations and sexual behavior are infrequent. Of particular interest, however, is the wild gorilla’s chest-slapping behavior, an elaborate sequence of nine distinct acts to relieve frustration. 
Despite his dedicated study of this great ape for nearly two years, Schaller never made physical contact with the mountain gorilla; this incredible feat was first accomplished by a remarkable primatologist, Dian Fossey (to date, the other two gorilla subspecies have not been seriously studied in their own habitats). In 1967, the late anthropologist Dian Fossey began living among the wild highland gorillas of the Virunga Volcanoes in central East Africa. Her research camp, Karisoke, is located in the ancient rain forest of Mt. Visoke; it is a tropical jungle world of fog and mist in Rwanda near Zaire. Encouraged by the late Louis S.B. Leakey and following in the footsteps of Schaller, Fossey undertook to investigate the mountain gorilla in its natural habitat with only notebook, camera, and binoculars. Her courageous and intensive years of observations between 1967 and 1985 in the ancient rain forests resulted in an invaluable scientific as well as compassionate contribution to the understanding of and appreciation for this giant of the primate world. Like the orangutan, chimpanzee, and bonobo, the free-living mountain gorilla is an endangered species. Because of the ongoing encroachment of human beings with civilization, this largest of the four great apes is now a vanishing animal threatened with extinction. In 1984, Fossey published Gorillas in the Mist, in which she gave a personal account of her thirteen-year study of four family groups of this rare pongid among the dense and remote rain forests of those dormant Virunga Volcanoes in central East Africa (Rwanda, Uganda, and Zaire). She followed gorilla groups on the ground in order to observe them at close range during their leisurely day-long search for food and shelter. Fossey was very successful in establishing a rapport with the mountain gorillas.
Habituation required her patiently learning to imitate their feeding methods, general sounds, basic gestures, submissive postures, and vocalizations of contentment. This led to her being gradually accepted by these wild pongids, which are usually timid despite their size and power. Her book focused on various group dynamics and behavioral patterns: social range, kinship bonds, nest building, group structure, play and grooming, sexual habits, ritual displays, inter-group and intra-group interactions, cannibalism, and even infanticide. Fossey also wrote about the gorilla’s vegetarian diet, mountainous ecosystem, territoriality, and diseases. As a result of Fossey’s relentless efforts, she was able to examine the uninhibited behavior of mountain gorillas under normal conditions in their natural range. Her close-range, long-term, intimate study of this highland pongid was a rich and rewarding experience for science. Unlike Schaller, and only after a long period of slow habituation, Fossey was able to actually make physical contact with several of these apes, the first time a scientist had touched this primate in its own habitat. In the glorious rain forests, this magnificent great ape is primarily a ground-dweller and usually a peaceful vegetarian. Yet a nervous adult gorilla will hoot, mouth a stick or leaf, rise bipedally, thrash branches and vegetation, slap its chest, kick its legs, run sideways, and finally thump the ground with its palms. It may even bluff a roaring “charge” as protective action if necessary, doing so only when provoked (this fierce-looking and very powerful ape will try to avoid physical contact with intruders). In truth, this pongid’s alleged savagery toward human beings is grossly exaggerated. However, gorilla brutality does exist. This ape is capable of aggressive behavior and even murder, including infanticide.
A mountain gorilla’s staple diet consists of fruit, bark, roots, thistles, nettles, wild celery, blackberry leaves, galium vines, and other such succulents. Primarily folivorous, this ape has never been observed eating meat, insects, or even birds’ eggs in its natural environment (the lowland gorilla is both frugivorous and folivorous). Most gorillas live in small, scattered groups which usually consist of about ten individuals (there are even both peripheral and loner blackbacks as well as silverbacks). These stable social units roam through the rain forests of central West Africa and wander on and below the slopes of the Virunga Volcanoes. A social group is dominated by the adult, alpha silverback male. Other members include a hierarchy of subordinate blackback males, adult females, juveniles, and infants. The fearsome adult silverback male gorilla is characterized by nuchal and sagittal crests, a prominent supraorbital torus, marked prognathism, long arms and powerful shoulders for modified arboreal brachiation, and a quadrupedal stance on the ground with the weight of the body supported on clenched knuckles and plantigrade feet (the presence of silvery grey hair on his back, rump, and hind legs clearly indicates a dominant adult male). Fossey discovered adult males to be unusually protective and tolerant toward the young; she once saw an old male tickle an infant with a flower, as might a kindly grandfather. Protected by the dense jungle foliage and with the advantage of its huge size, an adult, male mountain gorilla may even live as a loner (apparently the leopard, buffalo, and elephant are no serious threat to this great ape). Each gorilla has a unique personality as well as a distinct nose print and voice print. These gorillas communicate by using at least twenty-two distinct sounds.
The gorilla is an intelligent animal; in the rain and bamboo forests of mountainous central East Africa, more fully developed mental capabilities would not have increased its fitness. In the safety of the tropical jungle, there is no need for this ape to tax its brain (this had resulted in an early underestimation of this pongid’s true intelligence when compared to the extroverted behavior of the chimpanzee and bonobo). Because of its awesome size as well as brute strength and placid nature, it is extremely difficult for a human being to train and manage the mountain gorilla in captivity. Until recently, captive gorillas were displayed as prisoners in small, sterile cages which cruelly removed them from their normal biome and essential troop behavior. Fortunately, growing numbers of concerned biologists and anthropologists are becoming involved with the survival of this great ape in captivity. In zoos, the gorilla is prone to fatal pneumonia and tuberculosis; therefore, all artificial environments must be carefully watched and controlled. Unfortunately, it is also very difficult to mate this huge pongid in captivity; gorillas require privacy and conditions similar to their natural habitat for successful sexual activity (in fact, a gorilla may even identify itself with a human being). The mountain gorilla is in grave danger of extinction. This pongid’s precarious existence is threatened by human encroachment and ongoing neglect. Civilization is slowly engulfing the shrinking domain of this magnificent ape. If serious measures are not taken immediately, this unique, giant anthropoid of the natural primate world will soon vanish from the earth forever. 
Concerning extinction, several factors are determined by the evolutionary history of a particular primate and its own genetic information, for example, such species-specific factors as: size of the original geographic range, natural habitat requirements (ecosystem), population density limits, body size and weight, and behavioral traits. Other contributing factors include habitat alteration or destruction and human predation (i.e., hunting, collecting, and killing). Of course, humankind still remains the greatest danger to the survival of the wild gorilla in its own environment (and to the survival of other threatened primates). One alarming point: prompt and strict conservation measures are necessary, or else the impressive mountain gorilla will soon disappear. There is an urgent need for both the short-term preservation and the long-range conservation of this giant ape so that future generations may benefit from understanding and appreciating this remarkable pongid. The gorilla is a key species in the scientific study of primate evolution, biology, psychology, and behavior. Fossey’s pioneering research was both an intimate portrait of and an accurate report about this imperiled mountain pongid, dispelling legends and myths surrounding this majestic rare ape while bringing its plight to the consciousness of both naturalists and general readers. Her steadfast dedication to, and deep concern for, these precarious creatures represented primate ethology at its best. Ongoing research in Gabon, central West Africa, is providing needed information on the ecology and behavior of both lowland gorillas and chimpanzees (sympatric apes in the lush, tropical rain forest of Lope). Since 1960, anthropologist Jane Goodall has been studying the wild chimpanzee (Pan troglodytes schweinfurthii) in its natural environment.
She has lived among chimpanzees in the tropical rain forests and woodlands of central East Africa; her patient, sustained, and courageous efforts for over forty years have resulted in remarkable discoveries about the behavior of this great ape. Goodall had been inspired to observe the wild chimpanzee in its own habitat by Louis S. B. Leakey. Her dedicated and pioneering research is now recognized as a milestone in primate ethology. Her two books, In the Shadow of Man and The Chimpanzees of Gombe: Patterns of Behavior, are major contributions to the scientific literature on pongid behavior. Among the primates, as well as in terms of biological evolution, the chimpanzee is closest to the human species. Goodall has studied this wild ape at close range in the Gombe Stream Research Center on the shore of Lake Tanganyika in Tanzania. Established in 1943, this natural park remains a sanctuary for approximately 100 chimpanzees. With only her camera and binoculars, Goodall followed these trusting pongids into their own habitat. This unique, long-term, scientific study of a wild chimpanzee community yielded incredible new findings of inestimable value about this human-like ape of Africa. As a direct result of her impressive field research, she is now the preeminent authority on this endangered species. The chimpanzee is found throughout tropical Africa and is both arboreal and terrestrial. When in the trees, it is capable of modified brachiation, but while on the ground it will walk quadrupedally on the knuckles of its hands and on the outsides of its feet. Occasionally, it does stand erect in order to move about in bipedal locomotion for short distances. Unlike the orangutan and gorilla, the chimpanzee is an extroverted pongid (like the bonobo). In captivity, chimpanzees are easily trained for circuses and zoos. In the wild, they live in open, poorly defined, temporary, and unstable nomadic groups. 
A shifting band of 60-80 members consists of a hierarchy: adult males with a temporary male leader, young males, females with babies, and females without babies. There is no nuclear family unit; the only temporary bonding relationship is the parental care of an adult female for her infant or juvenile. Play and mutual grooming seem to ensure a state of well-being among the members of this pongid society. Chimpanzee sexual behavior is grounded in biochemistry. Mating habits are normatively promiscuous, but only when the adult female is in estrus. Presenting and mounting are typical social behavior patterns among the adult males, representing subordinate and dominant actions, respectively. In nature, chimpanzees live a casual life within constantly shifting groups (these societies meander within a home range). They may even defend their territory against other intruding chimpanzee units. Chimpanzees build nests and dislike getting wet. Whether living in dense trees or on the jungle ground, these apes apparently fear only large cats (especially the lion and leopard) and the human species. In fact, human civilization is the major threat to this pongid’s existence in the wild. Chimpanzees show much individuality, differing in their facial expressions and mannerisms. They are very intelligent and highly emotional; their temperament ranges from violent aggression to gentle playfulness. Actually, one may even speak of chimpanzee personalities. Chimpanzees communicate and control behavior through a variety of calls as well as by touch, gesture, and cooperative behavior patterns (especially play and grooming). A chimpanzee group is dominated by the top-ranking or alpha male, and each sex has its own fluctuating dominance hierarchy. The bulk of chimpanzee food consists of fruits, leaves, and blossoms, as well as seeds, stems, bark, and nuts. Occasionally, this ape will add ants and termites to an otherwise basically frugivorous and vegetarian diet. 
Goodall’s major discovery is that wild chimpanzees make, use, and transport simple tools. They deliberately modify stems, twigs, sticks, or blades of grass for the specific purpose of probing insect mounds at certain times in order to extract and eat ants or termites (thereby challenging our species as the only tool-making animal). They seem to consider such insects as delectable morsels. Chimpanzees also crumple and chew leaves to make a “sponge” for obtaining water from notches in trees or for sopping up the soft brains from inside a monkey’s skull cavity. They even use large leaves as containers for carrying water, and also use sticks and stones as weapons. These activities clearly demonstrate intelligent, learned behavior in a social environment. In short, wild chimpanzees have a technology—albeit a very simple one. Wild chimpanzees are not only tool-makers, but also meat-eaters, occasionally adding raw flesh to their general diet. They will hunt, kill, and eat small red colobus, redtail, and blue monkeys (including infant baboons), as well as young bushbucks and young bushpigs. Goodall was the first primatologist to observe the frenzied “rain dance” ritual of this great ape, a stylized display of apparent nervous behavior. During a thunder-and-lightning storm, excited male chimpanzees stage a unique pattern of activity: They leap to the jungle floor and careen through the grass, then charge downhill while bellowing and brandishing boughs; this activity is followed by the act of slapping the ground or swatting at trees. This activity may last up to thirty minutes. Females and their young are merely arboreal spectators. Goodall also discovered that wild chimpanzees can display aggressive behavior and extreme brutality; they are capable of murderous violence and primitive warfare. At times, these apes are savage killers and ruthless cannibals.
Between 1974 and 1977, she observed the clash between two neighboring chimpanzee groups that resulted in the gradual extermination of the small southern Kahama society by the apes of the northern Kasakela region. It had not been known previously that wild chimpanzees would systematically and deliberately attack and kill one another. Slaughterous behavior may be grounded in territoriality, as one ape group defends its home range against unwanted intruders. Goodall also witnessed adult chimpanzees killing and eating their infants. Perhaps such primate aggression is biologically inherited, an innate aspect independent of social and environmental forces. Goodall’s pioneering research has resulted in original information on wild chimpanzee vocal communication, sexual activity, social hierarchies, facial expressions, greeting gestures, parental care, nest building, diseases, diet, grooming, and play. In general, chimpanzee behavior bears uncanny similarities to human behavior. Since the publications of Huxley, Haeckel, and Darwin in the nineteenth century, naturalists have become increasingly convinced that the human species does share a common prehistoric ancestor with both the chimpanzee and the gorilla. Fossil and genetic evidence suggests that a momentous split in primate evolution occurred about five million years ago, resulting in one route leading to the three living apes of Africa and another to the human animal itself. The anthropologist Adrienne Zihlman argues that the common ancestor shared by the chimpanzee, bonobo, and the human species was probably a hominoid that looked and behaved very much like the contemporary pygmy chimpanzee (Pan paniscus), that rare and intriguing great ape of equatorial Africa. If her hypothesis should prove to be true, the pygmy chimpanzee of today represents a living link with our remote evolutionary past. Chimpanzees are fascinating creatures with advanced brains and complex behavior. 
They demonstrate mental capacity for foresight, learning, symbolizing, problem-solving, and even self-awareness. As a result of rigorous research in biochemistry and ethology, one fact has been clearly established: The human animal is closer to the chimpanzee and bonobo than to any other living species. Jane Goodall’s impressive field work supports this conclusion and contributes to those efforts in preventing the extinction of wild chimpanzees. The wild bonobo (Pan paniscus) is found only in the forests of the Democratic Republic of the Congo, Central Africa, where about 15,000 members of this species now live. Although known as the pygmy chimpanzee, this great ape differs from the common chimpanzee. Compared to the chimpanzee, the bonobo has a smaller head, a darker face with tufts of hair on each side, and is taller with a more slender build (narrower shoulders, and legs longer relative to arms). It is a peaceful and gentle pongid. Bonobos have been scientifically studied only during the last two decades. They are more arboreal than chimpanzees, but less aggressive and less excitable, with violent behavior being very infrequent. Bonobos live in female-dominated, fluid social groups that are headed by an adult alpha female. Their sexual behavior is frequent, promiscuous, and inventive (bonobos literally “make love, not war” in order to reduce tension and resolve conflict). Bonobos use objects (for example, stones and branches) as tools, stand erect, and walk upright for short distances more often than chimpanzees and gorillas, and they display a range of emotions and activities that suggest an eerie resemblance to how our remote ancestors probably looked and behaved. One bonobo has even figured out, by himself, how to make a simple stone implement like those made by our own remote hominid ancestors in Africa 2.5 million years ago. Significant bonobo research is being done by Frans B. M. 
de Waal, whose unique book, Bonobo: The Forgotten Ape, released in 1997, is particularly important to understanding and appreciating this pongid. Ongoing field studies of the wild bonobos by biologists and anthropologists will shed more light on the bio-social origin and evolution of the human species. The gap between modern humans and the four great apes is very narrow, indeed (demonstrating how remarkable were the original insights of Huxley, Haeckel, and finally Darwin himself). Even so, only the human species is capable of sustained bipedality, using symbolic language as articulate speech (not to mention the complexity of its abstract thoughts and intricate behaviors), and creating an extraordinarily multifaceted socio-cultural milieu in which simple implements are used for the manufacturing of far more complex tools, weapons, and other objects. If the human species journeys to other planets and beyond, then it will carry with it those indelible biosocial marks of its primate origin and evolution on earth. Even as the future cosmic ape, humankind will always remain akin to the four living pongids on earth.
- Caravan, J. M. (1999). Gorillas: A portrait of the animal world. New York: Todtri.
- Galdikas, B. M. F. (1995). Reflections of Eden: My years with the orangutans of Borneo. Boston: Little, Brown.
- Galdikas, B. M. F., et al. (2000). Orangutan odyssey. New York: Harry N. Abrams.
- Galdikas, B. M. F. (2005). Great ape odyssey. New York: Harry N. Abrams.
- Lindsey, J. (1999). The great apes. New York: Friedman/Fairfax.
- Montgomery, S. (1991). Walking with the great apes: Jane Goodall, Dian Fossey, Birute Galdikas. Boston: Houghton Mifflin.
- Prince-Hughes, D. (2001). Gorillas among us: A primate ethnographer’s book of days. Tucson: University of Arizona Press.
- de Waal, F. B. M. (1997). Bonobo: The forgotten ape. Berkeley: University of California Press.
- de Waal, F. B. M. (2001). Tree of origin: What primate behavior can tell us about human social evolution. Cambridge: Harvard University Press.
- Weber, B., & Vedder, A. (2001). In the kingdom of gorillas: Fragile species in a dangerous land. New York: Simon & Schuster.
- Wrangham, R. W., et al. (Eds.). (1996). Chimpanzee cultures. Cambridge: Harvard University Press.
Four years ago, I wrote about a group of African frogs that remind me of the Marvel Comics character Wolverine, who fights with three retractable claws in each arm. The frogs, belonging to the family Arthroleptidae, also have bone claws in their feet. They use these in defence, as many naturalists discovered to their dismay. They’re not alone. On another continent, Noriko Iwai from the University of Tokyo has studied a different species – the Otton frog – that carries a similar bony spike in its foot. It’s large for a frog, growing to around 12 centimetres in length. The males use their spikes as anchors to latch onto females, and flick-knives for duelling with rival males. Stranger still, the Otton frog houses its spike in a “thumb”, which other frogs lack. Frogs have five toes on their hind legs, just like us, but most species have just four on their front legs. There are exceptions, though, and the Otton frog is one of them. It has a fifth front toe – a “pseudothumb” – which houses its spike. Iwai first described the spike in 2010, but she had no idea how it was used. The species is hard to study—it’s endangered, and only found in the Amami Islands in southern Japan. Since then, Iwai has been carefully observing the frogs at their breeding sites, videotaping their behaviour with infrared cameras, and gently inspecting captured animals. When captured, they’d kick out furiously. But when something irritated their chest, they’d pull their arms in and jab the spines towards themselves, as if to deliver a stabbing hug. The jabs of female frogs are so weak that they barely hurt. The males, however, are more forceful – they are bigger, they have longer and thicker thumb-spikes, and they’re better at unsheathing those spikes from their thumbs. In the delightfully deadpan style of academia, Iwai wrote: “If jabbed in the finger by a male’s spines, the researcher responded by dropping the frog.” I imagine so.
Iwai found that the males use their spikes during sex to hold onto the females, which end up with stab wounds on their sides and armpits. More than a third of them bear the scars of these encounters. Iwai even tried to prise a mating male off his partner, and found his thumb dug into her side (see image below). The males also fight each other with their weapons, often ambushing another male mid-coitus. Their bouts are intense (video), as are Iwai’s descriptions: “At 03:28 h when the couple started laying their eggs, another male (male A) suddenly head-butted the amplexing [a mating position – EY] male (male B), and the two grappled with a growl. Male A jabbed his arms into the head of male B while holding its head from two sides. Male B struggled to escape from the grasp of male A, but male A continued jabbing. For more than 4 min, male B kept trying to escape from male A by kicking and flapping. While grappling, the two frogs floated deeper into the water away from the center of the screen.” The Otton frog isn’t the only species to behave this way. A distantly related Central American species, Hypsiboas rosenbergi, also uses a spiky pseudothumb during combat. Their fights seem even more brutal – they jab their spines at the eyes and ear drums of their opponents, often inflicting lethal wounds. The Otton frogs are more restrained – males end up with scars, but they keep their lives. Other animals have also developed an extra digit, typically by extending one of the bones in the wrist. The giant panda uses its “pseudothumb” to grasp bamboo, and the elephant uses a sixth toe to support its massive bulk. The Otton frog uses its extra digit for two very different purposes: a mating anchor, and a fighting weapon. Iwai thinks that the anchor role came first. While many frogs breed in a disorganised mob, with many males crowding a female and randomly fertilising eggs, the Otton frog breeds as a single pair.
This leads to intense competition, and might have driven the evolution of an anchor to prevent the males from being dislodged. As this spike became larger, it gained a secondary role: as a tool for combat. Reference: Iwai. 2012. Morphology, function and evolution of the pseudothumb in the Otton frog. Journal of Zoology http://dx.doi.org/10.1111/j.1469-7998.2012.00971.x Images by Noriko Iwai
The Divine Liturgy is the primary worship service of the Church. The Divine Liturgy is a eucharistic service. It contains two parts: the Liturgy of the Catechumens, sometimes called the Liturgy of the Word, at which the Scriptures are proclaimed and expounded; and the Liturgy of the Faithful, sometimes called the Liturgy of the Eucharist, in which the gifts of bread and wine are offered and consecrated; the faithful then partake of them in the Sacrament of Holy Communion. The Church teaches that the gifts truly become the body and blood of Jesus Christ, but it has never dogmatized a particular formula for describing this transformation. The Prothesis (or Proskomedia), the service of preparing the holy gifts, can be considered a third part which precedes the Liturgy proper. The Great Fast or Lent is the period of preparation leading up to Holy Week and Pascha. The Lenten Triodion governs the divine services of Great Lent as well as those of the Weeks of Preparation preceding Great Lent. Lent is a Middle English word meaning “spring.” The Great Fast has come to be called Lent by association; it is called “great” to distinguish it from the other fasts. Observance of Great Lent is characterized by abstention from many foods, intensified private and public prayer, personal improvement, and almsgiving. The foods traditionally abstained from are meat and dairy products, fish, wine and oil. (According to some traditions, only olive oil is abstained from; in others, all vegetable oils.) Since strict fasting is canonically forbidden on the Sabbath and the Lord’s Day, wine and oil are permitted on Saturdays and Sundays. If the Feast of the Annunciation falls during Great Lent, then fish, wine and oil are permitted on that day. Besides the additional liturgical celebrations described below, Orthodox Christians are expected to pay closer attention to their private prayers and to say more of them more often. 
The Fathers have referred to fasting without prayer as “the fast of the demons” since the demons do not eat according to their incorporeal nature, but neither do they pray. During the weekdays of Great Lent, there is a liturgical fast when the eucharistic Divine Liturgy is not celebrated. However, since it is considered especially important to receive the Holy Mysteries during this season, the Liturgy of the Presanctified Gifts, also called the Liturgy of St. Gregory the Dialogist, may be celebrated on Wednesdays and Fridays. At this vesperal service some of the Body and Blood of Christ reserved the previous Sunday is distributed. On Saturday and Sunday the Divine Liturgy may be celebrated as usual, although on Sundays the more solemn Liturgy of St. Basil the Great is used in place of that of St. John Chrysostom. Like the observation of Lent in the West, Great Lent itself lasts for forty days, but unlike the West, Sundays are included in the count. It officially begins on Monday seven weeks before Pascha and concludes on the eve of Lazarus Saturday, the day before Palm Sunday. However, fasting continues for the following week, known as Passion Week, Great Week or Holy Week, up until Pascha. Great Lent begins on the Monday following Forgiveness Sunday (also called Cheesefare Sunday) with each Sunday highlighted as follows: 1. Sunday of Orthodoxy (John 1:43-51), 2. Sunday of St. Gregory Palamas, 3. Sunday of the Holy Cross, 4. Sunday of St. John Climacus, and 5. Sunday of St. Mary of Egypt. Great Lent is followed by Holy Week, the week beginning with Palm Sunday and preceding Pascha.
Holy Easter (Pascha)
Pascha (Greek: Πάσχα), also called Easter, is the feast of the Resurrection of the Lord. Pascha is a transliteration of the Greek word, which is itself a transliteration of the Hebrew pesach, both words meaning Passover.
(A minority of English-speaking Orthodox prefer the English word ‘Pasch.’) Pascha normally falls either one or five weeks later than the feast as observed by Christians who follow the Gregorian calendar. However, occasionally the two observances coincide, and on occasion they can be four weeks apart. The reason for the difference is that, though the two calendars use the same underlying formula to determine the festival, they compute from different starting points. The older Julian calendar’s solar calendar is 13 days behind the Gregorian’s and its lunar calendar is four to five days behind the Gregorian’s. The date of Pascha is April 12 in 2015, May 1 in 2016, and April 16 in 2017.
Celebration of the Feast
The cycle starts with a fast of forty days that precedes the feast. It is called the Nativity fast or Advent. For the faithful, it is a time to purify both soul and body to enter properly into and partake of the great spiritual reality of Christ’s Coming, much like the preparation for the fast of the Lord’s Resurrection. The beginning of the fast on November 15 is not liturgically marked by any hymns, but five days later, on the eve of the Feast of the Presentation of the Theotokos, we hear the first announcement from the nine “Irmoi” of the Christmas Canon: “Christ is born, glorify Him!” This period includes other special preparatory days announcing the approaching Nativity: St Andrew’s Day, November 30; St Nicholas Day, December 6; the Sunday of the Forefathers; and the Sunday of the Fathers. December 20th begins the Forefeast of the Nativity. The liturgical structure is similar to the Holy Week preceding Pascha. The Orthodox Church sees the birth of the Son of God as the beginning of the saving ministry which will lead Him, for the sake of man’s salvation, to the ultimate sacrifice of the Cross.
Eve of the Nativity
On the eve of the Nativity, the Royal Hours are read and the Divine Liturgy of St.
Basil the Great is served with Vespers. At these services the Old Testament prophecies of Christ’s birth are chanted. There is also a tradition of Vale or Holy Supper. This is a 12-course Lenten dinner served before the family goes to vespers. The Vigil of Christmas begins with Great Compline because Vespers has already been served. At Compline there is the singing of the Troparion and Kontakion of the feast with special hymns glorifying the Saviour’s birth. There are also the special long litanies of intercession and the solemn blessing of the five loaves of bread together with the wheat, wine, and oil. The faithful partake of the bread soaked in the wine and are also anointed with the oil. This part of the festal vigil, which is done on all great feasts, is called in Slavonic the litya and in Greek artoklasia, or the breaking of the bread. The order of Matins is that of a great feast. Here, for the first time, the full Canon “Christ is born,” is sung while the faithful venerate the Nativity icon. The Nativity according to the flesh of our Lord, God and Saviour Jesus Christ, also called Christmas, is one of the Great Feasts of the Orthodox Church, celebrated on December 25. In the fullness of time, our Lord Jesus Christ was born to the Holy Theotokos and Virgin Mary, thus entering into the world as a man and revealing Himself to mankind. According to the Bible and to Holy Tradition, Jesus was born in the city of Bethlehem in a cave, surrounded by farm animals and shepherds. The baby Jesus, born of the Virgin Mary, was laid in a manger; her husband St. Joseph assisted at the birth. St. Joseph and the Theotokos were forced to travel due to a Roman census; the odd location of the birth was the result of the refusal of a nearby inn to accommodate the expecting couple (Luke 2:1-20).
(It is known historically that dwellings were built directly over such caves housing livestock, in order to make use of the heat.) Though three magi from the East are commonly depicted as visiting during the event itself (or, in Roman Catholic tradition, twelve days thereafter), the Bible records the coming of an unspecified number of wise men as being a few years after Jesus’ birth (see Matthew 2). In either case, these magi came bearing gifts of gold, frankincense, and myrrh (Matt 2:11). In the hymnography for the feast, these gifts are interpreted to signify Christ’s royalty, divinity, and suffering. Though Jesus’ birth is celebrated on December 25, most scholars agree that it is unlikely he was actually born on this date. The choice of December 25 for the Church’s celebration of the Nativity is most likely to have been in order to squelch attendance at pagan solstice festivals falling on the same day. At least, this is the urban myth promulgated by both heterodox Christians and unbelievers for centuries.
The Nativity of Christ (Menologion of Basil II, 10th-11th c.)
However, the solstice festival fell on the 21st of December. To suggest that The Church chose a day of sacred observance defensively instead of pro-actively is to devalue and disregard the sacred and authoritative action of The Church in establishing a proper date for the observance of The Nativity of Christ The Lord. Others within The Orthodox Church have observed that, under Hebrew law, male infants were both circumcised and received their name eight days after their birth. (See The Account of The Circumcision and Naming of John–The Forerunner and Baptist–in The Gospel according to The Apostle Saint Luke 1:59-66, and The Account of The Circumcision and Naming of Christ The Lord as Jesus in Luke 2:21.) Also, within The Orthodox Church, January 1st is celebrated as the “name day” of The Lord Christ Jesus.
Thus, the selection of December 25 to celebrate the Nativity of the Christ (who would not be named for eight more days) would appear to have been a conscious counting backward from the first day of the calendar year, the day of His being proclaimed Son of Man, to the date of His birth, the day of His being proclaimed Son of God.

Concluding the celebration of the Nativity of Christ is the Liturgy. It begins with psalms of glorification and praise instead of the three normal Antiphons. The troparion and kontakion mark the entrance with the Book of the Gospels. The baptismal line from Galatians 3:27 once again replaces the Thrice-Holy. The Epistle reading is from Galatians 4:4-7, the Gospel reading is the familiar Christmas story from Matthew 2:1-12, and then the liturgy continues in the normal fashion.

Twelve Days of Christmas

The second day of the feast begins a two-day celebration of the Synaxis of the Theotokos. Combining the hymns of the Nativity with those celebrating the Mother of God, the Church points to Mary as the one through whom the Incarnation was made possible. St. Stephen, the First Martyr, is also remembered on these two days. On the Sunday after Christmas the Church commemorates James the Brother of Our Lord, David the King, and Joseph the Betrothed. Eight days after the Nativity is the feast of the Circumcision of our Lord. The festal period extends to Theophany, during which time the Christmas songs are sung and fasting and kneeling in prayer are not called for by the Church. Throughout this time, it is the custom of some Orthodox Christians to greet each other with the words "Christ is born!" and the response "Glorify Him!" Many in the English-speaking world will also use the culturally common "Merry Christmas!"
The hottest topic in my elementary school tech classroom is Minecraft–and has been for several years. So I was thrilled when efriend, Josh Ward, offered to write an article for Ask a Tech Teacher connecting Minecraft and the most important topic in my classroom–Digital Citizenship. Josh is the Director of Sales and Marketing for green hosting provider, A Small Orange. He is originally from Southeast Texas, but has called Austin home for almost 20 years. He enjoys writing about his passion for all things Internet related as well as sharing his expertise in the web hosting industry and education. I think you’ll enjoy this article: Teaching Digital Citizenship with Minecraft A “digital citizen” is generally defined as “those who use the Internet regularly and effectively.” With children and teenagers moving more and more toward the Internet and away from television for their recreational and informational needs (95% of all teens from ages 12 to 17 are online, and 80% of those use social media regularly), the next generation of digital citizens isn’t just arriving, they’re already here. Advertisers and corporations have known this for some time, and have begun targeting the youth demographic that will drive the country’s economic future, making responsible and informed “digital citizenship” that much more important. The Internet has come to play a huge part in not only our daily lives, but our educational future, and these formative years are a perfect time to stress the importance of a free and open Internet, as well as developing a strong sense of civic identity, cooperation, and participation. Building Worlds Together Games like Minecraft can actually be a valuable tool in building digital citizenship. Unlike many traditional games, Minecraft places a strong focus on creativity, resource management, and cooperation. 
Minecraft's basic gameplay is deceptively simple — the player exists in a large, open-ended world, gathering natural resources to survive in a world populated by hostile creatures. A player must chop wood, mine stone, build shelters, acquire food, and craft weapons, using only the materials found in the game world itself. The game has often been compared to LEGO building blocks, only digital (and thus functionally infinite).

Beyond this simple concept, however, lies a deeper level of gameplay. Once a player masters the basics of survival, the potential of an open-ended game world reveals itself. A player can build anything he or she can conceive, from buildings and gardens to elaborate architectural and engineering marvels. Industrious Minecraft players have done everything from recreating fictional or historical buildings to building working virtual machines. Minecraft can teach not only logic, problem-solving, and resource management, but also the value of cooperation, coordination, and leadership. Many Minecraft players, including students, set up Minecraft servers in which many players can cooperate on a single goal.

Servers and Sharing

Setting up one's own Minecraft server can be a project in itself. Since Minecraft runs on Java, anyone desiring to set up their own server must at least know how to install and run both the server software and the game client. Setting up a server from scratch requires some basic networking knowledge, such as IP addresses, ports, and rudimentary network configuration. While there are extensive step-by-step tutorials on setting up one's own server, there are also many hosting companies that offer server "rentals," taking care of the heavy lifting of server setup and allowing users to get started playing right away. Once the server is set up, the administrator may invite several players to join, who can all play together in the same persistent game world.
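The networking basics mentioned above can be illustrated with a short sketch. The snippet below simply checks whether a TCP connection to a server's address succeeds; 25565 is Minecraft's default server port, while the function name and timeout are my own choices for illustration:

```python
import socket

def server_reachable(host: str, port: int = 25565, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    25565 is Minecraft's default server port; players point their game
    client at this same host:port pair to join the world.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A student hosting a world at home could run `server_reachable("127.0.0.1")` to confirm the server is actually listening before sharing the address with classmates.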
A game server is a single machine, running a single instance of a Minecraft “world,” which can then be accessed via an IP address. There are already thousands of Minecraft servers on the Internet; some open to the public, others restricted to a few chosen members. The administrator of any server decides not only who can participate, but must manage membership and play style — an open server, for example, is subject to vandalism by random players, who may discover the server and alter, damage, or even destroy the creations built by other players. Minecraft also features other types of gameplay — for example, “Adventure” servers, where players may have to cooperate to solve puzzles and achieve a single goal. The base game also includes a “Creative” mode, which removes the need to harvest resources or survive against monsters, freeing players up to build whatever their imagination can conceive. A Minecraft server can easily become a thriving microcosm of a real community. For large construction projects to be successful, resources must be coordinated and shared, and if the server is in “survival” mode (where monsters appear after sundown to attack players), time management, shelter, and defense become important skills. For example, a Minecraft “village” might feature a farm of domesticated animals, which must be herded, fed, and protected. Trees, an important source of wood, must be replanted using saplings, lest virtual deforestation occur and wood become scarce. Rare ores and minerals can be stored in chests for use by the group — and the larger the project, the greater the need for organization and leadership. The educational possibilities for a game like Minecraft are manifold — not only can the game teach skills like resource management, cooperation, and leadership, but it also touches on ecological themes (such as deforestation and mining). 
If students set up their own Minecraft servers, they can also learn more about hosting and basic computer networking. While deceptively simple on the surface, Minecraft can be a valuable tool in teaching digital citizenship to students.

Josh Ward is the Director of Sales and Marketing for green hosting provider, A Small Orange. Their vision is simple: perfecting hosting while maintaining a homegrown feel with a focus on people – customers, employees, and the community. Josh is originally from Southeast Texas, but has called Austin home for almost 20 years. He enjoys writing about his passion for all things Internet related as well as sharing his expertise in the web hosting industry and education.

Attribution for Minecraft image: This photo, "The Village in Minecraft," is copyright (c) 2011 by post-apocalyptic research institute and made available under an Attribution-ShareAlike 3.0 license.
All Roads in Congress May Lead to Block Granting Medicaid

Congress is considering a number of different mechanisms that may result in cuts so large that the only option would be to block grant Medicaid. Under a block grant, Congress would give states a reduced, fixed amount of money and eliminate many of the requirements (such as whom to cover and what services to provide). Block granting is the worst option for people with intellectual and developmental disabilities (I/DD), as it would fundamentally change the structure of the program, not just cut funding for it. The individual entitlement to health care and long-term services and supports would be lost, and the states' entitlement to reimbursement for actual costs would be lost. This is why it is so important to hold Members of Congress accountable for their positions on each of the mechanisms described below.

What are Spending Caps?

One approach to deficit reduction that is being seriously considered is to impose spending caps or limits. These caps restrict government spending, usually to a certain percentage of Gross Domestic Product (GDP). One proposal would limit federal spending to 20.6% of GDP (spending is currently 24% of GDP). This figure is the average amount of federal spending compared to all goods and services produced by the country (or GDP) over the last 40 years (before spending on aging baby boomers, national security, and interest on the debt was significant). Congress is currently considering three types of caps:

- A global spending cap (for all federal spending);
- An entitlement spending cap (for Medicare, Medicaid, and Social Security spending); and
- A global health spending cap (for Medicaid, Medicare, and Affordable Care Act spending).

What happens if federal spending exceeds the spending caps?

There would be an enforcement mechanism of automatic, across-the-board spending cuts (called "sequestration") if the spending limits or targets were expected to be missed.
Low-income programs, such as Medicaid and Social Security, would not be exempted. To bring federal spending back in line with the proposed spending caps or targets, Congress would be forced to make drastic cuts in entitlement programs. Those cuts would most likely have to include block grants for the Medicaid program.

What Legislation is Congress considering that might include spending caps?

There are currently two main efforts in Congress that are expected to involve spending caps. The first, a measure to increase the debt ceiling, is by far the most serious threat, as the U.S. is close to reaching a point of default on its financial obligations. The second, a balanced budget amendment, may or may not advance.

- Raising the Debt Ceiling. The U.S. debt reached the limit of $14.3 trillion allowed by law in mid-May. However, the Treasury Secretary is able to manage accounts without defaulting until about August 2. If federal borrowing authority is not increased by August 2, the U.S. will begin defaulting on its debt, triggering a catastrophic global financial crisis. Some Members of Congress have stated that they will vote to raise the debt ceiling ONLY IF major cuts in federal spending are included. While no specific programs and amounts have yet been made public, Medicaid is widely expected to be a major target.
- Balanced Budget Amendment. Unlike the constitutions of most states, the U.S. Constitution does not actually require Congress to pass a balanced budget. Some Members of Congress are looking to add a balanced budget amendment to ensure that the federal government does not spend more than it takes in, with no borrowing authority. If this were to happen, most federal spending would be radically reduced, including Medicaid.
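To get a rough sense of the scale implied by the cap described above (reducing federal spending from 24% of GDP to the proposed 20.6%), the sketch below plugs in a round GDP figure. The $15 trillion GDP is an illustrative assumption of mine, not a number from this piece:

```python
# Illustrative only: the GDP figure below is an assumption, not from the article.
GDP = 15_000_000_000_000   # assumed U.S. GDP of roughly $15 trillion
CURRENT_SHARE = 0.24       # federal spending today, per the article
PROPOSED_CAP = 0.206       # proposed cap on spending, per the article

# Annual cut required to bring spending down to the cap.
required_cut = (CURRENT_SHARE - PROPOSED_CAP) * GDP
print(f"${required_cut / 1e9:.0f} billion per year")  # $510 billion per year
```

Even under this rough assumption, the gap is on the order of hundreds of billions of dollars per year, which is why entitlement programs like Medicaid would be squarely in the crosshairs.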
Interrupted aortic arch (IAA) is an extremely serious condition of the heart in which the development of the aortic arch is disrupted. The arch of the aorta is the segment which connects the ascending and descending parts of this great artery, which carries oxygenated blood from the left ventricle of the heart to the entire systemic circulation. In most cases the infant also has a ventricular septal defect (VSD) or an atrial septal defect (ASD). Many other genetic conditions are also associated with IAA, such as DiGeorge syndrome, especially in type B, as both are due to the same chromosomal deletion. Trisomy 13 and 18 have also been found to occur alongside IAA. Other associations include:

- Truncus arteriosus, where a single vessel arises from both ventricles
- Aortopulmonary window, where there is a defect in the septum between the aorta and the pulmonary artery
- Transposition of the great arteries (pulmonary artery and aorta)
- Double-outlet right ventricle
- Functional single ventricle defects

Normal Heart Functioning

In the normally functioning heart, the left side of the heart receives blood following oxygenation in the lungs via the pulmonary veins, which empty into the left atrium. This passes through the mitral valve to reach the left ventricle, from where it is pumped out into the aorta to supply the whole body. The upper part of the body, namely the head, neck and upper limbs, is supplied through the branches of the aorta that arise from the aortic arch. These comprise:

- The right brachiocephalic artery, which divides into the right common carotid and right subclavian arteries, supplying the head and right upper limb respectively
- The left common carotid artery
- The left subclavian artery

The descending aorta branches off to the rest of the body. Returning venous blood drains into the right side of the heart, namely the right atrium, through the great veins, the superior and inferior vena cavae.
This blood empties into the right ventricle through the tricuspid valve, from where it is pumped to the lungs for oxygenation through the pulmonary trunk. Blood from the lungs returns to the left atrium and the cycle is repeated. Both sides of the heart are thus separated from each other throughout their functioning. During fetal life, the ductus arteriosus connects the pulmonary artery to the aorta, to bypass the pulmonary circulation. This vessel closes in the first few days after birth.

Abnormal Circulatory Function in IAA

In an infant with IAA, the missing aortic arch means that oxygenated blood from the left ventricle cannot reach the descending aorta and supply the body below the level of the subclavian arteries. The presence of the ductus arteriosus allows some desaturated or hypoxemic blood to be shunted into the aorta below the level of the ductus, and this reaches the lower part of the body. The upper part of the body receives oxygen-rich blood, however. When a VSD is present, some oxygenated blood from the left ventricle reaches the right ventricle and is pumped into the pulmonary artery. This ensures that some oxygen reaches the lower body through the ductus. Thus the presence of shunts at various levels helps the infant with IAA to compensate for the lack of the aortic arch to a small extent. However, these babies are usually very sick at birth. The closure of the ductus precipitates cardiogenic shock and metabolic acidosis.

Surgical correction following emergency stabilization of the infant is the only possible treatment that can offer a chance of survival. One-stage or multiple-stage repairs have been carried out. At present most surgeons prefer a one-stage repair with primary repair of the aortic arch.
The advantages of a one-stage repair include:

- The need for fewer reoperations
- Avoiding the need for pulmonary artery banding, which increases the rate of development of subaortic stenosis, an independent risk factor for death
- Possible avoidance of future reconstructions of the aortic arch

Staged repair can use the left carotid artery as an autograft to fuse the two segments of the aorta, or employ synthetic grafts. The use of the native left carotid artery was not associated with any short-term or future neurodevelopmental or other adverse outcomes.

Complications following aortic arch reconstruction include:

- Bronchial compression, when the anastomosis of the ascending and descending aorta produces too much tension on the two components
- Aortic arch stenosis recurring after some time, requiring reoperation
- Left ventricular outflow tract obstruction

Factors which increase the odds of mortality include:

- Low birth weight
- Repair at younger ages
- Type B IAA
- VSDs which involve the ventricular outlet or trabeculae
- Small VSDs
- Subaortic stenosis

In some centers, mortality at 10 years is 19% after staged correction, which reflects a dramatic improvement over the decades since the development of corrective surgery for this condition. However, reoperation rates remain high at present.

Reviewed by Yolanda Smith, BPharm
Emperor penguins live on the continent of Antarctica, forming colonies on the massive pieces of ice. These penguins live along the coast of Antarctica, moving between the ice and the frigid waters.

Many Antarctic penguin species migrate to warmer areas in the winter. The Emperor penguin is the only species that remains on the Antarctic ice, even during the harshest parts of winter, when temperatures may reach 40 to 60 degrees below zero Fahrenheit. The penguins remain in colonies that consist of thousands of penguins. They huddle together for warmth or use the icebergs or cliffs for shelter from storms or extreme cold. The penguins cooperate to ensure warmth and survival of the colony, always shifting around to make sure penguins on the perimeter of the colony are brought into the middle to warm up.
The adolescent age group is a very susceptible one. These children are in a phase of transformation from childhood to adulthood. Most adolescents manage this transformation, but many indulge in behaviors such as sexual experimentation, exploration and promiscuity, which can lead to problems of unmarried motherhood, abortion, STDs, HIV infection and sexual abuse. India has the largest population of adolescents in the world, about 243 million; among them 69.5 get married before 20 years of age, and there are about 2.47 cases of HIV-infected persons in the country, along with sexually transmitted diseases.

This study aimed to assess the effectiveness of a planned teaching programme on sex education in a selected nursing college of Dehradun, Uttarakhand. A quantitative evaluative research approach was used. Setting: Himalayan College of Nursing, Jolly Grant, Dehradun. Sample: a consecutive sample of 44 General Nursing and Midwifery (GNM) students. Tool: a self-structured questionnaire to assess knowledge regarding sex education was prepared. Intervention: a planned teaching programme on sex education.

The findings of the study revealed that the post-test knowledge score was significantly higher than the pre-test knowledge score; the difference between pre-test and post-test was significant at the level of p < 0.005. There was no significant association between pre-test knowledge score and demographic variables.
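The pre-test/post-test comparison reported above is typically evaluated with a paired t-test. The sketch below shows that computation on made-up scores; the data and the helper name are illustrative, not taken from the study:

```python
import math
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> float:
    """t statistic for a paired-samples t-test (post minus pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # mean difference divided by the standard error of the differences
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative knowledge scores for five students, before and after teaching.
pre = [10, 12, 11, 9, 13]
post = [16, 18, 15, 14, 19]
t = paired_t(pre, post)
print(round(t, 2))  # 13.5 -- a large t statistic, consistent with a significant gain
```

The resulting t statistic is compared against a t distribution with n - 1 degrees of freedom to obtain the p value reported in an abstract like this one.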
Rajesh Singh | Anjali Gupta | Deepika Badola | Poonam Chauhan | Anupriya Bisht | Upma George, "A Study to Assess the Effectiveness of Planned Teaching Programme on Sex Education among GNM First Year Students in a Selected College of Nursing in Dehradun Uttarakhand," published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 5, Issue 6, October 2021. URL: https://www.ijtsrd.com/papers/ijtsrd47494.pdf Paper URL: https://www.ijtsrd.com/medicine/nursing/47494/a-study-to-assess-the-effectiveness-of-planned-teaching-programme-on-sex-education-among-gnm-first-year-students-in-a-selected-college-of-nursing-in-dehradun-uttarakhand/rajesh-singh
In a world with increasingly global economies and competition, students need to learn how to think critically and analytically, and to apply their imaginations to solve complex problems. Problem-based learning (PBL) does just that, helping students identify problems, pose their own questions, research answers, report results, and create a stake in their own learning. While teachers know the benefits, they are sometimes challenged by the process. Expert John Barell troubleshoots the PBL process for teachers. Basic procedures make this remarkably effective teaching model accessible and highly doable for all teachers, from beginners to veterans. The author draws on practical classroom experiences and incorporates methods that are widely praised by reviewers and users of the first edition. This standards-based, teacher-friendly second edition includes:

· A step-by-step method to simplify the process
· Examples showing problem-based learning in action
· Answers to frequently asked questions on standards-based implementation
· Thorough guidelines for developing problems for students to solve and letting them develop their own
· Rubrics and assessment tips to ensure that standards are met

Problem-Based Learning, Second Edition, offers an easy-to-follow, rich teaching model for all teachers and grade levels, enabling you to confidently engage students for more meaningful learning and success, both inside and outside the classroom!
Right now, the most common use humans make of sound is to communicate: emitting sounds through the mouth in a specific way to create words. These same words can be used in everyday conversations or to sing songs. Music could also be considered another form of communication, only a little more artistic, as it includes sounds emitted by musical instruments of all kinds, creating a melody that tries to convey feelings and ideas.

Sound has always been an important part of technology, ever since the 19th century, when the first phonograph was created, a very primitive radio system was patented, and the first telephone that transmitted voices was built. During the twentieth century there were a number of impressive advances, making both telephone and radio common, socially accepted technologies, but also bringing other fabulous improvements, such as televisions (which emitted images along with sound), stereo sound, and the ability to make audio recordings on cassettes, CDs, and digitally with the appearance of MP3 files.

In spite of all these milestones in technology where sound was the protagonist, when we think about what the future will be like, we rarely imagine that great advances could be made in the area of acoustics. However, right now, science is using sound to create things that are cooler than ever, and that may become more common in the future.

A group of researchers at Penn State University in the United States has developed a refrigerator that works with sound. It operates on the principle that sound waves compress and expand the gas they travel through, causing it to heat and cool. For this to work and replace the electricity currently used by these appliances, enough gas needs to be placed inside the cooling chamber; in this case, they decided to use gas at a pressure equivalent to 10 atmospheres.
Once inside the chamber, the gas is compressed with more than 173 decibels of sound (which would be worse than sitting inside a running airplane turbine), producing heat. The heat is then absorbed by metal plates that carry it to an exchange system where it is removed, creating cold inside the refrigerator. Although it sounds somewhat complicated, its application would be a simple way to create refrigerators much "greener" than any currently on the market. Today's refrigerators may save energy, but they still use chemical refrigerants that damage our atmosphere.

We already know that the life of smartphone batteries is terrible, so wouldn't it be great to be able to charge them easily, without having to rely on ordinary electricity at all? Maybe someday it can be done using only your voice. Sound, as we said before, is a viable source of energy because it produces heat, and some researchers have been experimenting with this so that you can charge your mobile from the sound produced by your voice every time you make a call. During 2011, scientists in Seoul put nano-sized zinc oxide bars between two electrodes, a perfect and very small system for generating electricity using sound waves. This produced just 50 millivolts, too little to charge any mobile phone. By 2014, however, other scientists in London used the same method and managed to produce five volts, enough to charge a smartphone. Is this a real alternative that we can use in the future?

Disney is not only dedicated to creating fantastic princesses; it also brings important science to the world, as in the case of the "Ishin-Den-Shin" project, named after a Japanese expression used when there is communication through mutual and tacit understanding. This project has an interesting system for transmitting sounds that consists of connecting a microphone to a computer so that someone can speak through it.
The computer then converts it into a loop recording and sends it back into the microphone through a thin wire, but now it is not a sound but a high voltage, low current signal that is totally inaudible. The signal, in turn, creates an electrostatic field that produces a very small vibration when the finger of the person holding the microphone touches an object, turning the person into a loudspeaker. Since all this sounds a little complicated, let’s go to the coolest part of the system: when person A is holding the microphone in one of his hands, he can use one of his fingers on the other to touch the ear of person B, who will feel a small vibration. The moment of touch creates a loudspeaker between the two individuals.
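The decibel figure quoted in the refrigerator section can be translated into physical sound pressure with the standard dB SPL formula, which uses a reference pressure of 20 micropascals. The sketch below is illustrative; the 60 dB comparison level for ordinary conversation is my own addition:

```python
P_REF = 20e-6  # 20 micropascals, the standard 0 dB SPL reference pressure

def db_to_pascals(db_spl: float) -> float:
    """Convert a sound pressure level in dB SPL to pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)

print(db_to_pascals(60))   # ~0.02 Pa, roughly ordinary conversation
print(db_to_pascals(173))  # ~8900 Pa, the level cited for the thermoacoustic fridge
```

Because the scale is logarithmic, 173 dB corresponds to a pressure hundreds of thousands of times greater than ordinary speech, which is why the chamber must be sealed and pressurized.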
The best reason to go to college is to learn more about the world you live in. You may have put off going to college because you were not ready or couldn't afford it. Now, as you think about college again, a college education offers other benefits.

Getting a college degree is a career necessity in today's business world. College graduates earn nearly twice as much during their working years as high school graduates. Information from the U.S. Census Bureau's 2008 report reinforces the value of a college education: workers 25 and over with a bachelor's degree earn an average of $60,954 a year, while those with a high school diploma earn $33,618. Workers with a master's degree make an average of $71,236, those with a doctoral degree earn an average of $99,995, and those with a professional degree earn an average of $125,622. Looking at it from a different view, over an adult's working life (45 years), high school graduates can expect, on average, to earn $1.5 million; those with a bachelor's degree, $2.7 million; and people with a master's degree, $3.2 million. Persons with doctoral degrees earn an average of $4.5 million during their working life, while those with professional degrees do best at $5.6 million.

College graduation will qualify you for many jobs that would not be available to you any other way. Your career advancement should be easier because some job promotions require a college degree. A college education will help you develop your skills in reasoning, tolerance, reflection and communication. These skills will help you resolve the conflicts and crises that come up in the course of a personal or professional life. A satisfying life depends upon the rational resolution of conflicts and crises. A college education will also help you understand other people's viewpoints and learn how to disagree sensibly.
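The annual and lifetime figures quoted above are consistent with each other: multiplying the annual averages by a 45-year working life reproduces the lifetime totals. A quick back-of-the-envelope check (a sketch; the rounding to the nearest $0.1 million is my own):

```python
# Average annual earnings by education level (the U.S. Census Bureau 2008
# figures quoted above) and the 45-year working life used in the article.
ANNUAL_EARNINGS = {
    "high school diploma": 33_618,
    "bachelor's degree": 60_954,
    "master's degree": 71_236,
}
WORKING_YEARS = 45

def lifetime_millions(level: str) -> float:
    """Annual earnings times 45 years, rounded to the nearest $0.1 million."""
    return round(ANNUAL_EARNINGS[level] * WORKING_YEARS / 1_000_000, 1)

print(lifetime_millions("high school diploma"))  # 1.5 -- matches "$1.5 million"
print(lifetime_millions("bachelor's degree"))    # 2.7 -- matches "$2.7 million"
print(lifetime_millions("master's degree"))      # 3.2 -- matches "$3.2 million"
```

The same multiplication with the doctoral and professional figures ($99,995 and $125,622) reproduces the $4.5 million and $5.6 million lifetime totals as well.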
Of course, these critical skills can be developed without going to college, but the college environment has proven to be a good place to practice, learn and polish skills that will last you a lifetime.

College and Networking

Many college graduates feel that the greatest benefit of their college years is the expansion of their social horizons. Meeting new people, making new friends, companionship and sharing new experiences lead to personal growth. The skill of meeting and sharing information with people is known as networking. College graduates say that contacts they made in college often helped them find the job they wanted. Others report that friends in college were tied to their own career climb. College graduates describe the value of these networks as having expanded their horizons from the tribal village to the global village.
Back to top

In the last century, most colleges in the United States were small, private, church-related institutions that prepared a small percentage of the college-age population for the ministry, law or medicine. Now, America is filled with large, predominantly public universities and colleges that provide courses of study for every career choice to students of a variety of backgrounds. The only barrier to going to college may be cost, but there are ways to remove even that barrier.

College Enrollment Trends

America's technological advancements demand more and more college-educated people for a fast-paced workforce. As of 2009, 19 percent of the U.S. population over the age of 25 had completed a bachelor's degree, an increase of two percentage points over the past decade.

Trends in Cost of Tuition

College tuition and room and board are not cheap, and the costs have been rising steadily for the past 20 years, with costs at some schools having doubled. When asked about its importance, more than 80 percent of Americans said that having a college degree is important to getting ahead; in fact, a college education has become as important as a high school diploma used to be.

Actual Costs of College: Academic Year 2008-2009

College costs vary widely depending on individual schools. Private colleges and universities are more expensive than public colleges and universities because they receive less support from state governments. The two- or four-year college or university in your home state will probably be the least expensive school to attend. The table below gives detailed information on average costs for the 2008-2009 academic year compared with the 2009-2010 academic year (in-state charges).
COMPARING COLLEGE COSTS

|                   | Academic Year 2008-2009 | Academic Year 2009-2010 |
|-------------------|-------------------------|-------------------------|
| Four-Year Public  |                         |                         |
| Tuition and Fees  | $6,585                  | $7,202                  |
| Room and Board    | $7,707                  | $8,193                  |
| Four-Year Private |                         |                         |
| Tuition and Fees  | $25,243                 | $26,273                 |
| Room and Board    | $8,996                  | $9,363                  |
| Two-Year Public   |                         |                         |
| Tuition and Fees  | $2,402                  | $2,544                  |

The average surcharge for out-of-state or out-of-district students at public four-year colleges is $11,528. You can get current cost data for all colleges and universities from collegeboard.com. Even if you choose a lower-cost school, you can save even more if you choose cost-saving plans that lower the costs of your room, meals and books.

Books and Supplies

The national average for books and supplies at four-year public colleges in 2009-2010 is $1,122.

Future Cost of Tuition and Fees

College tuition and fees are not likely to drop any time soon. The best that can be hoped for is that as our economy improves, state funding of colleges and universities will increase, allowing tuition charges to stabilize. Don't let costs stop you from getting an education that translates into a lifetime of opportunity. Instead, take a look at the different ways you can finance your college education.

Back to top

Developing Your College Financial Plan

Most people don't pay for college in cash. Financial aid is available, and most American community colleges provide an associate degree program that is the equivalent of the first two years of a bachelor's degree. That means you can transfer to a four-year college or university after you graduate from a two-year community college. That alone will save you the higher tuition and room and board of a four-year school. The final two years can be financed with a plan that includes federal student aid, federal and state education incentive programs, federal student loan programs, college savings programs, and earnings.
Tell your parents, grandparents and other family members or family friends about your financial plan; your support system is much more likely to contribute when they see that your ambition to get a college education has determination and a plan behind it.

Step 1: Select Your College Choices

The first step in your plan is to decide where to go for your education. Your first choice should be based on where you think you will get the best preparation for your chosen career. Put that private college first on your list, if that is where you really want to be. Next should be top-rated public colleges for your chosen field. About two-thirds of all full-time undergraduate students receive grant aid. In 2009-10, estimated aid in the form of grants and tax benefits averaged about $3,000 per student at public two-year colleges, about $5,400 at public four-year colleges, and about $14,400 per student at private four-year colleges, according to The College Board. Finally, consider back-up colleges such as larger public universities or nearby community colleges. The benefits of a community college are that you can live at home, keep costs low, attend smaller classes and transfer to a senior college or university to complete your degree program.

Step 2: Get Cost Information for the Colleges You Have Chosen

Get realistic costs from each college you want to consider, including housing, lab fees and books. If you are planning on enrolling in two years, check how much costs have risen over the past two years and project that increase forward; that keeps your estimates realistic.

Step 3: File Your Free Application for Federal Student Aid (FAFSA)

After you have a list of colleges and costs, you are ready to file the Free Application for Federal Student Aid. It's an important part of your future, because it determines the amount of financial help you can apply for.
The FAFSA Formula

When you file your FAFSA, you will be asked to report income, savings, family size, family assets, and the number and identity of family members attending college, among other details. The information you provide on your application is plugged into a formula maintained by the U.S. Congress that gives the dollar amount you and your family are expected to pay, called the expected family contribution (EFC). You'll also learn the dollar amount of student aid you are eligible for during the year you apply. Your financial need, or eligibility for student aid, is the difference between the cost of attending the college of your choice, as calculated by the college, and your calculated EFC. Here's the formula:

Cost of college - expected family contribution = financial need

The best and most efficient way to file your FAFSA is to use the FAFSA Web site, http://www.fafsa.ed.gov. This Web site includes everything you need to know about the application. There is a four-page pre-application worksheet that you can use before you attempt to file your FAFSA. The worksheet lists all of the financial information that you must know before you can begin your application: it defines who is considered a parent, what assets should or should not be included, what income should be included, and details regarding income taxes. After reviewing the worksheet, you are ready to file the real application. It's easy and convenient to file it on the Web.

The Benefits of Filing on the Web

A Web application can be faster because the process uses skip logic: depending on your answers, the electronic form moves you ahead in the application process, so you may have to answer fewer questions. The Web application process also checks your information before you submit it, reducing the chances that your application will be rejected because of missing or contradictory information.
Your application information can be saved and transmitted at any time from any location that provides Web access. You'll need to file a FAFSA for each year you attend college. If anything happens after you file that affects your expected family contribution, such as loss of employment, divorce or disability, you can update the information. Your FAFSA becomes your Student Aid Report (SAR), a copy of which is sent to the colleges you are considering attending. When a college's student aid office receives your SAR, they put together a student aid package to meet your financial need. Your FAFSA is also an important part of the document chain for student loans.

Step 4: Determine the Financial Shortfall for Meeting College Costs

Your SAR tells you how much money your family is expected to contribute each year and how much in student aid you can expect. After all the financial support and student aid opportunities are considered, you are likely to discover that you still have to provide some additional money to pay for college. There are a number of ways to cover the costs, including working, but student loans are the first choice of many college students.

Step 5: Consider Student Education Loans

How much you rely on loans to cover your college expenses is your decision. Weigh the amount of the loan against your post-graduate earnings to see when and how fast you can pay back the loan. Over the past two decades, there has been a 70-percent increase in federal student loan debt for the average graduate of a public college.

Step 6: Identify Other Resources, If Necessary

You could also consider private loans, perhaps from family members. You may be able to cover college costs by cutting your room or board expenses. Consider part-time employment, but be careful about working too much the first year; it's often the hardest academically, so don't expect to be able to handle a tough work schedule while you go to school full time. Getting grades that lead to graduation is your main concern.
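Steps 3 and 4 come down to the subtraction given earlier: cost of college minus expected family contribution equals financial need. A minimal sketch in code, using hypothetical figures (the real EFC comes from the federal formula applied to your FAFSA data):

```rust
/// Financial need = cost of attendance - expected family contribution (EFC).
/// Need is never negative: if the EFC exceeds the cost, the need is zero.
fn financial_need(cost_of_college: u32, efc: u32) -> u32 {
    cost_of_college.saturating_sub(efc)
}

fn main() {
    // Hypothetical figures: a $24,000 cost of attendance and a $9,000 EFC.
    println!("need = ${}", financial_need(24_000, 9_000)); // need = $15000
    // If the family contribution covers everything, there is no shortfall.
    println!("need = ${}", financial_need(5_000, 9_000)); // need = $0
}
```

The `saturating_sub` clamps at zero, matching the fact that a family whose EFC exceeds the cost of attendance simply has no need-based eligibility.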
Step 7: Match Your Choice of School to Your Financial Plan

Now that you have a good idea of how much money will come from you, your family, loans and financial aid, take another look at your school choices. If your first choice is too expensive, look to your other choices to see if they offer more financial aid.

What if You Still Can't Afford College?

If you still can't afford a four-year program, consider a two-plus-two program. These programs allow you to complete the first two years of a four-year degree at a community college that has a joint-transfer agreement with a four-year bachelor's degree college. Visit the college where you plan to transfer for your junior and senior years, and make sure you know the courses you must take at the community college to make the transfer work. Save all you can in the first two years at the lower-cost community college; if you can live at home while attending, you should be able to save even more. It may not be your first choice, but it works. Your goal is to graduate from college, even if you have to live at home for two years.

Back to top

Planning and Paying for Higher Education

The more time you have before you begin paying for higher education, the more opportunities you have to accumulate and save money to reach your goal. If you can accumulate funds for higher education on a tax-free or tax-deferred basis, you may benefit by delaying, or never paying, taxes on your earnings and gains on investments. There are also ways to "lock in" the cost of higher education, protecting your family from future tuition increases.

Education IRAs or Coverdell Education Plans

Coverdell education savings accounts (ESAs) used to be known as education individual retirement accounts (IRAs). The name was changed because these accounts have nothing to do with retirement; however, ESAs share many of the same features as IRAs in terms of how much you can save, tax benefits and penalties.
Any individual with modified adjusted gross income of less than $110,000 can contribute up to $2,000 a year to an ESA that will benefit any person under age 18; in fact, a child may contribute $2,000 to his or her own ESA. Since there is no limit on the number of ESAs that can be established for a minor, the money can really add up if you get family members to contribute.

ESA withdrawals that are used to pay for qualified education expenses are tax free. Qualified education expenses include tuition, room, board, fees and supplies related to attendance at a qualified elementary, secondary or post-secondary (college) institution. The money must be used by the time the beneficiary reaches age 30, although the account can be transferred to a relative under that age.

There are some negatives to consider. No tax deduction is allowed for contributions to an ESA. Earnings withdrawn from an ESA and not used for qualified education expenses are subject to tax and a 10-percent penalty. Finally, the money in an ESA is considered part of the assets of the beneficiary, which will generally reduce the amount of need-based financial aid available to the student. You can open a Coverdell ESA at most banks, credit unions and other financial institutions.

Qualified Tuition Plans (529 Plans)

Qualified tuition plans, also called "529" college savings plans, are established by individual states, but most are available through financial services companies. You do not have to open the account in your state of residence or the state in which the beneficiary will attend school, so it's best to shop around for the best plan. Many financial services companies offer plans from a number of states. (A comparison table of the main advantages and disadvantages of 529 college savings plans appeared here.)

Savings Bonds

Savings bonds are an excellent way to accumulate funds for education expenses because the interest earned over the life of the bonds can be tax-free if you plan properly.
For those in the 15-percent federal tax bracket, this means that 100 percent (instead of just 85 percent) of the interest earned on the eligible bonds will be available to pay tuition and fees for you and your family. Interest earned on Series I and Series EE bonds issued after 1989 and used to pay for education may be partially or fully excluded from federal income tax, provided that certain conditions are met. You do not have to designate the bonds for education when you purchase them. This gives you the flexibility to use the bonds for retirement or other purposes if it turns out that they are not needed for education. It also allows you to use bonds that you already own, for education expenses that you would have paid from other funds, to take advantage of the tax benefit.

Qualifying for the Full Interest Exclusion

For the 2009 tax year, if your modified adjusted gross income is less than $104,900 on a joint return (or $69,950 for single taxpayers), you may qualify for the full interest exclusion. The exclusion is then phased out gradually until it is eliminated at income levels above $134,900 (joint) and $84,950 (single). You must use all of the proceeds from cashing the bonds to pay qualified education expenses in order to receive the full benefit; if your qualified education expenses are less than the proceeds of the eligible bonds, the interest exclusion is pro-rated.

If it appears that you can take advantage of this tax benefit, see the instructions for IRS Form 8815 at http://www.irs.gov/pub/irs-pdf/f8815.pdf for complete details on the fine print and the conditions of eligibility.

As the time approaches for payment of tuition bills, you should consider what financial aid may be available, which funds to use first, and whether you will need to borrow money.
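The pro-ration and phase-out described above can be sketched in code. This is an illustration only, not tax advice: the function name is invented, the linear phase-out between the 2009 joint-filer limits is a simplifying assumption, and the authoritative computation is the worksheet on IRS Form 8815.

```rust
/// Illustrative sketch of the savings-bond interest exclusion for 2009
/// joint filers. Assumes a linear phase-out between $104,900 (full
/// exclusion) and $134,900 (no exclusion) of MAGI, a simplification of
/// the actual Form 8815 worksheet.
fn excludable_interest(interest: f64, proceeds: f64, expenses: f64, magi: f64) -> f64 {
    // Pro-rate when qualified expenses are less than the bond proceeds.
    let expense_ratio = (expenses / proceeds).min(1.0);
    // Linear phase-out over the $30,000 window above $104,900 (assumed).
    let phase_out = ((134_900.0 - magi) / 30_000.0).clamp(0.0, 1.0);
    interest * expense_ratio * phase_out
}

fn main() {
    // $1,000 of interest, all proceeds spent on tuition, MAGI under the limit:
    println!("{}", excludable_interest(1_000.0, 10_000.0, 10_000.0, 100_000.0)); // 1000
    // Only half the proceeds went to qualified expenses: exclusion is halved.
    println!("{}", excludable_interest(1_000.0, 10_000.0, 5_000.0, 100_000.0)); // 500
}
```

The two factors multiply: spending only part of the proceeds on qualified expenses and sitting partway into the phase-out range each shave the exclusion independently.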
Paying for Higher Education

When you or a family member is ready to start at a college or another institution of higher learning (or if someone has already started), your first step is to investigate the financial aid for which you may be eligible. A reliable and objective source of information about all types of financial aid for education is The College Board. Its "Pay for College" section has comprehensive information on scholarships, financial aid and loans, as well as calculators and tools for decision-making. The site even has links to apply for financial aid and loans.

The Free Application for Federal Student Aid (FAFSA) is a standard financial form used to apply for federal and state student grants, work-study and loans. After your FAFSA is processed, you will receive a Student Aid Report (SAR). The most important number on the SAR is the expected family contribution (EFC): the amount of money that you are expected to provide for the next year of education. The EFC is often more than most families expect. Although you may receive more aid than anticipated from some sources, and you may be able to appeal the EFC, it is best to start thinking about which financial resources you will tap.

It is usually preferable to first use any funds earmarked for education, such as education IRAs and 529 plans. Since funds in these accounts must be spent on education (in order to avoid taxes and penalties), they are your initial source for paying upcoming tuition bills. The only exception would be situations in which you decide to re-designate some of these accounts for family members who will enter college later.

Next, you should look to funds that may be withdrawn without tax consequences, such as variable universal life insurance: certain types of insurance allow you to withdraw cash up to the amount of investment premiums paid.
Depleting a life insurance policy is a matter of balancing the value of education against the economic needs resulting from the death of the insured.

After all the above sources are used, you still have options. A variety of parent loans for education, including federal PLUS loans, are available; see The College Board for more information. Parents may use a home equity line of credit or loan, with some possibility of tax-deductible interest; ask your tax advisor for details. Savings and investments may be liquidated, but plan any sales of investments carefully to avoid substantial taxes. Parents may also tap TSPs, pension plans and IRAs with loans or withdrawals for education purposes. However, these options should be seen as a last resort, because there may be adverse tax consequences and because these funds are earmarked for retirement.

A good source of information about various plans is http://www.collegeboard.com/article/0,1120,6-29-53-8851,00.html?orig=sub. One of the best Web sites for comparing 529 plans is http://www.savingforcollege.com.

Back to top
This page is part of © FOTW Flags Of The World website

Flags of Linguistic Groups (page under construction!)

Last modified: 2000-02-11 by Antonio Martins

Keywords: language | Links: FOTW homepage | disclaimer and copyright | write us |

Flags representing languages?

What flag should I use to represent this language? There's no easy answer to this question, as national flags and languages don't always match up. Some nations have more than one language; some languages are spoken in more than one nation. The easiest rule is to know your audience. If you are writing in English, French, and Spanish to a group of Europeans, use England or the UK for English, France for French and Spain for Spanish. For a group of North Americans, use the U.S. for English, Canada or Quebec for French, and Mexico for Spanish. Use common sense and choose a flag that will be easily understood by the most people.
Operating system differences can cause your Rust binaries to break when run in a different environment than the one they were compiled in. Here are the most common things to watch out for.

Operating Systems: Using Rust to build or explore operating systems.

Thanks to PR #70740 and a lot of work by Vadim Petrochenkov, on current Rust nightly the default binary output of the x86_64-unknown-linux-musl target is a static position-independent executable (static-pie) with address space layout randomization (ASLR) on execution.

My KVM Forum 2018 presentation titled Security in QEMU: How Virtual Machines provide Isolation (pdf) (video) reviewed security bugs in QEMU and found the most common causes were C programming bugs. This includes buffer overflows, use-after-free, uninitialized memory, and more. In this post I will argue for using Rust as a safer language that prevents these classes of bugs. In 2018 the choice of a safer language was not clear. C++ offered safe abstractions without an effective way to prohibit unsafe language features. Go also offered safety, but with concerns about runtime costs. Rust looked promising, but few people had deep experience with it. In 2018 I was not able to argue confidently for moving away from C in QEMU. Now in 2020 the situation is clearer. C programming bugs are still the main cause of CVEs in QEMU. Rust has matured, its ecosystem is growing and healthy, and there are virtualization projects like Crosvm, Firecracker, and cloud-hypervisor that prove Rust is an effective language for writing virtual machine monitors (VMM). In the QEMU community, Paolo Bonzini and Sergio Lopez's work on rust-vmm and vhost-user code inspired me to look more closely at moving away from C.

This post describes how Ebbflow vends its client, which is written in Rust, to its Linux users, describing the tools used to build the various packages for popular distributions. In a future post, we will discuss how these packages are ultimately vended to users.
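On the theme of platform differences above, Rust's conditional compilation is the usual way to isolate OS-specific code paths explicitly. A minimal sketch; the directory conventions shown are illustrative assumptions, and in real code a crate such as `dirs` handles the corner cases:

```rust
/// Guess the conventional per-user configuration directory for the
/// current platform. `cfg!` is evaluated at compile time, so only the
/// branch for the compilation target survives optimization.
fn config_dir_hint(home: &str) -> String {
    if cfg!(target_os = "windows") {
        format!("{}\\AppData\\Roaming", home)
    } else if cfg!(target_os = "macos") {
        format!("{}/Library/Application Support", home)
    } else {
        // Linux and other Unix-likes: the XDG convention.
        format!("{}/.config", home)
    }
}

fn main() {
    println!("config lives under: {}", config_dir_hint("/home/user"));
}
```

For code that should not even compile on the wrong platform (say, a `libc` call), the attribute form `#[cfg(target_os = "linux")]` on the item is the stronger tool; `cfg!` merely selects between branches that all must type-check everywhere.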
An operating system is there to make our jobs easier; in our instance, when using graphics, in addition to everything else. In this post, we will be writing a GPU (graphics processing unit) driver using the VirtIO specification. Here, we will allow user applications to have a portion of the screen as RAM, with what is commonly known as a framebuffer.

Welcome to a new issue of "This Month in Rust OSDev". In these posts, we give a regular overview of notable changes in the Rust operating system development ecosystem. In July, we switched the bootloader/bootimage crates from `cargo-xbuild` to cargo's `build-std` feature. There was also some great progress on the acpi crate, and much more.

Today I'm publishing tihle, a new emulator targeting TI graphing calculators (currently only the 83+, but maybe others later). There's rather a lot to say about it, but here I will discuss the motivation for a new emulator and the state of the art, followed by technical notes on the design and initial development process.

Data produced by programs needs to be stored somewhere for future reference, and there must be some sort of organisation so we can quickly retrieve the desired information. A file system (FS) is responsible for this task and provides an abstraction over the storage devices where the data is physically stored. In this post, we will learn more about the concepts used by file systems, and how they fit together when writing your own.

If you've been following my Redox Summer of Code progress, you might have noticed a long break after the last post. At first, the reason was that I just lost track of time. My previous years of RSoC have followed a similarly inconsistent schedule, which I now refer to as an interval of one blog post per "programmer week", where a "programmer week" is anywhere from 3 days to a month… Now, the reason for not finishing is that I'm basically done!
That's right, GDB has served us reliably for the past few weeks, where we've been able to debug our dynamic linker (ld.so) and find problems with shared libraries. This week has been mostly about advancing the interface as much as possible, with the goal of being the default for pcid, xhcid, and usbscsid, as I previously mentioned. With the introduction of the AsyncScheme trait, I have now actually been able to operate the pci: scheme socket (well, :pci) completely asynchronously and with io_uring, by making the in-kernel RootScheme async too.

This post is about how I booted to bare-metal Rust on x86_64. My goal is to describe my learning path and hopefully get you interested in the things I talk about. I'll be very happy if you find this content useful. Note that I'm a beginner and I may be wrong about many things. If you want to learn more, I'll put links to many resources.

Nixpkgs recently merged PR #93568, allowing the Nix package manager to cross-compile packages to Redox.

After the last week, where I was mainly blocked by the bug about blocking init, I've now been able to make further progress with the io_uring design. I have improved the redox-iou crate, which is Redox's own liburing alternative, to support a fully-featured buffer pool allocator meant for userspace-to-userspace io_urings (where the kernel can't manage memory); to work with multiple secondary rings other than the main kernel ring; and to support spawning, as you would expect from a proper executor in tokio or async-std.

Getting a timer interrupt working is a common task on the to-do list of an OS developer. Although it is a very simple task on some architectures, to have it on AArch64 you need to configure the so-called interrupt controller. From this post you will learn how to initialize the Generic Interrupt Controller (GIC), control priorities, and target an interrupt at a specific core.

Welcome to a new issue of "This Month in Rust OSDev".
In these posts, we will give a regular overview of notable changes in the Rust operating system development ecosystem.

I read the official Rust book at the end of 2019 but never had a project idea. That's why I decided to rewrite one of my already existing C++ projects. A few months after I started, I had already gained lots of experience and began to wonder whether it was possible to rewrite my Windows kernel drivers in Rust. A quick search led me to many unanswered questions and two GitHub repositories. One of these repositories is winapi-kmd-rs, which is unfortunately really complicated and outdated. I almost gave up until I stumbled upon win_driver_example, which made me realize that a lot has changed and that it's not even that hard. This post summarizes what went wrong and what I learned.

I write a ton of articles about Rust. And in those articles, the main focus is on writing Rust code that compiles. Once it compiles, well, we're basically in the clear! Especially if it compiles to a single executable that's made up entirely of Rust code. That works great for short tutorials, or one-off explorations. Unfortunately, "in the real world", our code often has to share the stage with other code. And Rust is great at that. Compiling Go code to a static library, for example, is relatively finicky. It insists on being built with GCC (and no other compiler), and linked with GNU ld (and no other linker). In contrast, Rust lends itself very well to "just write a bit of fast and safe code and integrate it into something else". It uses LLVM for codegen, which, as detractors will point out, doesn't support as many targets as GCC does (but there's always work in progress to address that), and it supports using GCC, Clang, and MSVC to compile C dependencies, and GNU ld, the LLVM linker, and the Microsoft linker to link the result.

This week has mostly been minor bug fixes for the redox_syscall and kernel parts.
I began the week by trying to get pcid to properly do all of its scheme logic, which it hasn't previously done (its IPC is currently based only on passing command-line arguments, or pipes). This meant that the kernel could no longer simply process the syscalls immediately (which I managed to do with non-blocking syscalls such as SYS_OPEN and SYS_CLOSE) by invoking the scheme functions directly from the kernel. So for the FilesUpdate opcode, I then tinkered a bit with the built-in event queues in the kernel, by adding a method to register the interest of a context that will block on the event, and by allowing non-blocking polls of the event queues.

This week has been quite productive for the most part. I continued updating the RFC with some newer ideas that I came up with while working on the implementation, most importantly how the kernel is going to be involved in io_uring operation. I also came up with a set of standard opcodes that schemes are meant to use when using io_uring, except in some special scenarios (like general-purpose IPC between processes). The opcodes, at this point in time, can be found here.

Yesterday at 15:08 I sent this image excitedly to the Redox chat, along with the message "Debugging on Redox… We're soon, soon, there."

Earlier this year, we used the C2Rust framework to translate applications such as Quake 3 to Rust. In this post, we'll show you that it is also possible to translate privileged software such as modules that are loaded by the Linux kernel. We'll use a small, 3-file kernel module which is part of the Bareflank Hypervisor SDK developed by Assured Information Security, but you can use the same techniques to translate other kernel modules.

With Apple's recent announcement that they are moving away from Intel x86 CPUs to their own ARM CPUs for future laptops and desktops, I thought it would be a good time to take a look at some of the differences that can affect systems programmers working in Rust.
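A few of the cross-architecture differences alluded to above can be probed from Rust itself; a minimal sketch (the values it reports naturally depend on the machine it runs on):

```rust
use std::mem::size_of;

/// Report a few properties that can differ between targets such as
/// x86_64 and aarch64: pointer width, architecture name, and byte
/// order. `usize` always matches the platform's pointer width.
fn describe_target() -> (usize, &'static str, &'static str) {
    let pointer_bits = size_of::<usize>() * 8;
    let arch = if cfg!(target_arch = "x86_64") {
        "x86_64"
    } else if cfg!(target_arch = "aarch64") {
        "aarch64"
    } else {
        "other"
    };
    let endian = if cfg!(target_endian = "little") { "little" } else { "big" };
    (pointer_bits, arch, endian)
}

fn main() {
    let (bits, arch, endian) = describe_target();
    println!("{}-bit {} ({}-endian)", bits, arch, endian);
}
```

One classic gotcha in this area: `std::os::raw::c_char` is signed on x86_64 Linux but unsigned on AArch64 Linux, which is exactly the kind of silent difference that surfaces when FFI code written on one architecture moves to the other.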
This is my first year of Redox Summer of Code, and my intent is to continue my prior work (outside of RSoC) on improving the Redox drivers and the kernel. I started this week with quite a minor change: implementing a more advanced syscall for allocating physical memory, namely physalloc3. Unlike the more basic physalloc, which only takes a size as a parameter, physalloc3 also takes flags and a minimal size; this allows a driver to request a large range and fall back to multiple small ranges, if the physical memory space were to be too fragmented, by using scatter-gather lists (a form of vectored I/O, like preadv, for hardware). It also adds support for 32-bit-only allocation for devices that do not support the entire 64-bit physical address space.

As you might know, last year I spent the summer implementing a ptrace alternative for Redox OS. It's a powerful system where the tracing is done using a file handle. You can read all about the design over at the RFC. Thanks to this system I also got strace working, and then I started working on a simple gdbserver in Rust, for both Linux and Redox (but mainly Linux at that point), to lay the foundation for debugging on Redox using a Rust-based program. This week, I've been using the remnants of last year to work on porting this debugging server to Redox. To do this, I had to make some more changes to the kernel side of things.

Diosix 2.0 strives to be a lightweight, fast, and secure multiprocessor bare-metal hypervisor for 32-bit and 64-bit RISC-V systems. It is written in Rust, a C/C++-like systems programming language focused on memory and thread safety as well as performance and reliability. The ultimate goal is to build fully open-source packages containing everything needed to configure FPGA-based systems with RISC-V cores and peripheral controllers, and boot a stack of software customized for a particular task, all generated on demand if necessary. This software should also run on supported ASICs and systems-on-chips.
Right now, Diosix is a work in progress. It can bring up a RISC-V system, load a Linux kernel and minimal filesystem into a virtualized environment called a capsule, and begin executing it.

This is the moment we've all been waiting for. Ten chapters of setup have led us to this moment: to finally be able to load a process from the disk and run it. The file format for executables is called ELF (executable and linkable format). I will go into some detail about it, but there are plenty of avenues you can explore with this one file type.

Welcome back, and thanks for joining us for the thirteenth installment of our series on ELF files: what they are, what they can do, what the dynamic linker does to them, and how we can do it ourselves. I've been pretty successfully avoiding talking about TLS so far (no, not that one), but I guess we've reached a point where it cannot be delayed any further, so. So. Thread-local storage.

Let's do another dive into packaging Rust for Debian with a slightly more complicated example.

Welcome to the second issue of "This Month in Rust OSDev". In these posts, we will give a regular overview of notable changes in the Rust operating system development ecosystem.

Did you know that Rust has a Tier 2 target called i586-pc-windows-msvc? I didn't either, until a few days ago. This target disables SSE2 support and only emits instructions available on the original Intel Pentium from 1993. So, for fun, I wanted to try compiling a binary that works on similarly old systems. My retro Windows of choice is Windows 98 Second Edition, so that is what I have settled on as the initial target for this project.

Storage is an important part of an operating system. When we run a shell or execute another program, we're loading it from some sort of secondary storage, such as a hard drive or USB stick. We talked about the block driver in the last chapter, but that only reads and writes to the storage.
The storage itself arranges its 0s and 1s in a certain order. This order is called the file system. The file system I opted to use is the Minix 3 filesystem, whose practical application I will describe here. For a broader overview of the Minix 3 file system or file systems in general, please refer to the course notes and/or video that I posted above. I will go through each part of the Minix 3 file system, but the following diagram depicts all aspects and the structure of the Minix 3 file system. The VirtIO protocol is a way to communicate with virtualized devices, such as a block device (hard drive) or input device (mouse/keyboard). For this post, I will show you how to write a block driver using the VirtIO protocol. The first thing we must understand is that VirtIO is just a generic I/O communication protocol. Then, we have to look at the block device section to see the communication protocol specifically for block devices. Welcome to the first issue of "This Month in Rust OSDev". In these posts, we will give a regular overview of notable changes in the Rust operating system development community. These posts are the successor of the "Status Update" posts on the "Writing an OS in Rust" blog. Instead of only focusing on the updates to the blog and the directly related crates, we try to give an overview of the full Rust OSDev ecosystem in this new series. This includes all the projects under the rust-osdev GitHub organization, relevant projects of other organizations, and also personal OS projects. Last fall I was working on a library to make a safe API for driving futures on top of an io-uring instance. Though I released bindings to liburing called iou, the futures integration, called ostkreuz, was never released. I don't know if I will pick this work up again in the future but several different people have started writing other libraries with similar goals, so I wanted to write up some notes on what I learned working with io-uring and Rust's futures model.
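Rust's futures model, mentioned just above, boils down to a Future trait that is polled until it reports Ready, with a Waker to signal when polling is worth retrying. The essentials fit in a small sketch: a future that needs two polls (mimicking a completion-based I/O operation), plus the world's smallest executor with a do-nothing waker. This is a toy, not how an io-uring reactor would actually wake tasks.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A future that must be polled twice before it yields a value,
/// mimicking an I/O operation that is not ready on the first poll.
struct TwoPoll {
    polled: bool,
}

impl Future for TwoPoll {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled {
            Poll::Ready(42)
        } else {
            self.polled = true;
            // A real reactor (e.g. one driven by io-uring completions)
            // would arrange for the waker to be called here.
            Poll::Pending
        }
    }
}

// A do-nothing waker: fine for a toy executor that just polls in a loop.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// The world's smallest executor: poll until ready.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: we never move `fut` after pinning it.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let answer = block_on(async { TwoPoll { polled: false }.await + 1 });
    assert_eq!(answer, 43);
    println!("answer = {}", answer);
}
```

A real executor parks instead of spinning and only re-polls a future once its waker fires; that waker plumbing is exactly where an io-uring completion queue would hook in.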
In Part 11, we spent some time clarifying mechanisms we had previously glossed over: how variables and functions from other ELF objects were accessed at runtime. We saw that doing so "proper" required the cooperation of the compiler, the assembler, the linker, and the dynamic loader. We also learned that the mechanism for functions was actually quite complicated! And sorta clever! And finally, we ignored all the cleverness and "made things work" with a three-line change, adding support for both GlobDat and JumpSlot relocations. We're not done with relocations yet, of course - but I think we've earned ourselves a little break. There are plenty of other things we've been ignoring so far! For example… how are command-line arguments passed to an executable? In our last installment of "Making our own executable packer", we did some code cleanups. We got rid of a bunch of unsafe code, and found a way to represent memory-mapped data structures safely. But that article was merely a break in our otherwise colorful saga of "trying to get as many executables to run with our own dynamic loader". The last thing we got running was the ifunc-nolibc program. In this post we explore cooperative multitasking and the async/await feature of Rust. We take a detailed look at how async/await works in Rust, including the design of the Future trait, the state machine transformation, and pinning. We then add basic support for async/await to our kernel by creating an asynchronous keyboard task and a basic executor. Starting a process is what we've all been waiting for. The operating system's job is essentially to support running processes. In this post, we will look at a process from the OS's perspective as well as the CPU's perspective. We looked at the process memory in the last chapter, but some of that has been modified so that we have a resident memory space (on the heap). Also, I will show you how to go from kernel mode into user mode.
Right now, we've erased supervisor mode, but we will fix that when we revisit system calls in order to support processes. Welcome back to the “Making our own executable packer” series, where digressions are our bread and butter. Last time, we implemented indirect functions in a no-libc C program. Of course, we got lost on the way and accidentally implemented a couple of useful elk-powered GDB functions - with only the minimal required amount of Python code. The article got pretty long, and we could use a nice distraction. And I have just the thing! A little while ago, a member of the Rust language design team stumbled upon this series and gave me some feedback. It has been a while since the last Redox OS news, and I think it is good to provide an update on how things are progressing. The dynamic linking support in relibc got to the point where rustc could be loaded, but hangs occur after loading the LLVM codegen library. Debugging this issue has been difficult, so I am taking some time to consider other aspects of Redox OS. Recently, I have been working on a new package format, called pkgar. Bottlerocket is a free and open-source Linux-based operating system meant for hosting containers. Bottlerocket focuses on security and maintainability, providing a reliable, consistent, and safe platform for container-based workloads. This is a reflection of what we've learned building operating systems and services at Amazon. You can read more about what drives us in our charter. The base operating system has just what you need to run containers reliably, and is built with standard open-source components. Bottlerocket-specific additions focus on reliable updates and on the API. Instead of making configuration changes manually, you can change settings with an API call, and these changes are automatically migrated through updates. In the last article, we cleaned up our dynamic linker a little. We even implemented the Dynamic relocation. 
But it's still pretty far away from running real-world applications. In the last article, we managed to load a program (hello-dl) that uses a single dynamic library (libmsg.so) containing a single exported symbol, msg. So… we got one application to load. Does it work on other applications? Let's pick up where we left off: we had just taught elk to load not only an executable, but also its dependencies, and then their dependencies as well. We discovered that ld-linux walked the dependency graph breadth-first, and so we did that too. Of course, it's a little bit overkill since we only have one dependency, but, nevertheless, elk happily loads our executable and its one dependency. My Rust adventure continues as I have been furiously working on Rust/WinRT for the last five months or so. I am looking forward to opening it up to the community as soon as possible. Even then, it will be early days and there is much still to do. I remember chatting with Martyn Lovell about this a few years ago and we basically agreed that it takes about three years to build a language projection. Naturally, you can get value out of it before then but that's what you need to keep in mind when you consider completeness. Still, I'm starting to be able to make API calls with Rust/WinRT and it's very satisfying to see this come together. So, I'll leave you with a sneak peek to give you a sense of what calling Windows APIs looks like in Rust. Up until now, we've been loading a single ELF file, and there wasn't much structure to how we did it: everything just kinda happened in main, in no particular order. But now that shared libraries are in the picture, we have to load multiple ELF files, with search paths, and keep them around so we can resolve symbols, and apply relocations across different objects. After a long period of trawling through references, painstakingly turning C struct definitions into nom parsers and hunting down valid enum values… it's time for some graphs.
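The breadth-first dependency walk described above (the executable first, then its direct dependencies, then theirs) is easy to sketch. The library names below are invented for illustration:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Walk a DT_NEEDED-style dependency graph breadth-first, the way
/// ld-linux (and elk) order object loading. Names are illustrative.
fn load_order(deps: &HashMap<&str, Vec<&str>>, root: &str) -> Vec<String> {
    let mut order = Vec::new();
    let mut seen = HashSet::new();
    let mut queue = VecDeque::new();
    queue.push_back(root);
    seen.insert(root);
    while let Some(obj) = queue.pop_front() {
        order.push(obj.to_string());
        for &dep in deps.get(obj).into_iter().flatten() {
            if seen.insert(dep) {
                queue.push_back(dep);
            }
        }
    }
    order
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("hello-dl", vec!["libmsg.so", "libc.so.6"]);
    deps.insert("libmsg.so", vec!["libc.so.6"]);
    deps.insert("libc.so.6", vec![]);
    let order = load_order(&deps, "hello-dl");
    assert_eq!(order, ["hello-dl", "libmsg.so", "libc.so.6"]);
    println!("{:?}", order);
}
```

The `seen` set is what keeps a diamond-shaped graph (two libraries sharing a dependency) from loading the shared object twice.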
In our last article, we managed to load and execute a PIE (position-independent executable). The big improvement in that article was that we started caring about relocations. It was enough for the code and data segments to be in the right place relative to each other, because it used RIP-relative addressing. We've seen that Relative relocations mean to replace the 64-bit integer at offset with the result of base + addend, where addend is specified in the relocation entry itself, and base is the address we chose to load the executable at. Well, that's all well and good. But what if we have two .asm files? The last article, Position-independent code, was a mess. But who could blame us? We looked at the world, and found it to be a chaotic and seemingly nonsensical place. So, in order to blend in, we had to let go of a little bit of sanity. The time has come to reclaim it. Short of faulty memory sticks, memory locations don't magically turn from 0x0 into valid addresses. Someone is doing the turning, and we're going to find out who, if it takes the rest of the series. While this is inspired by DOSBox, it is not a direct port. Many features are implemented differently or not at all. The goal was just to implement enough to play one of my favorite games and learn some Rust and emulation principles along the way. In the last article, we found where code was hiding in our samples/hello executable, by disassembling the whole file and then looking for syscalls. Later on, we learned how to inspect which memory ranges are mapped for a given PID (process identifier). We saw that memory areas weren't all equal: they can be readable, writable, and/or executable. Finally, we learned about program headers and how they specified which parts of the executable file should be mapped to which memory areas. System calls are a way for unprivileged, user applications to request services from the kernel. In the RISC-V architecture, we invoke the call using the ecall instruction. 
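The Relative relocation rule described above (write base + addend at the entry's offset) can be applied to an in-memory image like this; the byte buffer stands in for the mapped segment, and the concrete numbers are made up:

```rust
/// Apply an R_X86_64_RELATIVE-style relocation: store `base + addend`
/// as a little-endian u64 at `offset` inside the loaded image.
fn apply_relative(image: &mut [u8], offset: usize, base: u64, addend: u64) {
    let value = base + addend;
    image[offset..offset + 8].copy_from_slice(&value.to_le_bytes());
}

/// Read back the patched 64-bit value (little-endian).
fn read_u64(image: &[u8], offset: usize) -> u64 {
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&image[offset..offset + 8]);
    u64::from_le_bytes(buf)
}

fn main() {
    let mut image = vec![0u8; 32];
    // Pretend we mapped the object at 0x40_0000 and the relocation entry
    // says: offset 8, addend 0x1234 (values invented for illustration).
    apply_relative(&mut image, 8, 0x40_0000, 0x1234);
    assert_eq!(read_u64(&image, 8), 0x40_1234);
    println!("patched value: {:#x}", read_u64(&image, 8));
}
```

Because only `base` varies between runs, the same executable image can be loaded at any address, which is the whole point of position independence.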
This will cause the CPU to halt what it's doing, elevate privilege modes, and then jump to whatever function handler is stored in the mtvec (machine trap vector) register. Remember, this is the "funnel" where all traps are handled, including our system calls. We have to set up our convention for handling system calls. We can use a convention that already exists, so we can interface with a library, such as newlib. But, let's make this ours! We get to say what the system call numbers are, and where they will be when we execute a system call. I have started to package Rust things for Debian, and the process has been pretty smooth so far, but it was very hard to find information on how to start, so here is a small writeup on how I packaged my first Rust crate for Debian. In part 1, we've looked at three executables: sample, an assembly program that prints "hi there" using the write system call. entry_point, a C program that prints the address of main using printf. The /bin/true executable, probably also a C program (because it's part of GNU coreutils), and which just exits with code 0. We noticed that entry_point printed different addresses when run with GDB, but always the same address when run directly. What happens if we run it ourselves? This post explains how to implement heap allocators from scratch. It presents and discusses different allocator designs, including bump allocation, linked list allocation, and fixed-size block allocation. For each of the three designs, we will create a basic implementation that can be used for our kernel. In this post, we will implement cooperative multitasking. For simplicity, we will use a round-robin scheduler, where each thread will be run in FIFO order. What is a cooperative scheduler? Threads can run as long as they want, and can let other threads run by yielding to them. The problem? If threads refuse to yield, other threads will be unable to run.
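The cooperative round-robin idea just described can be simulated on the host: tasks run until they choose to yield, and yielded tasks go to the back of a FIFO queue. Here boxed closures stand in for real kernel threads; the names and turn counts are invented.

```rust
use std::collections::VecDeque;

/// Outcome of giving a task a turn on the (single) CPU.
enum Step {
    Yielded,
    Done,
}

/// A cooperative, round-robin scheduler: a task runs until it *chooses*
/// to yield, and is then pushed to the back of the FIFO queue.
fn run(mut tasks: VecDeque<(&'static str, Box<dyn FnMut() -> Step>)>) -> Vec<&'static str> {
    let mut trace = Vec::new();
    while let Some((name, mut task)) = tasks.pop_front() {
        trace.push(name);
        match task() {
            Step::Yielded => tasks.push_back((name, task)), // back of the line
            Step::Done => {}                                // task finished
        }
    }
    trace
}

fn main() {
    // A task that takes `turns` time slices before finishing.
    let make = |turns: u32| {
        let mut left = turns;
        Box::new(move || {
            left -= 1;
            if left == 0 { Step::Done } else { Step::Yielded }
        }) as Box<dyn FnMut() -> Step>
    };
    let mut q = VecDeque::new();
    q.push_back(("A", make(2)));
    q.push_back(("B", make(1)));
    q.push_back(("C", make(3)));
    // B finishes on its first turn; A and C yield and interleave in FIFO order.
    assert_eq!(run(q), ["A", "B", "C", "A", "C", "C"]);
}
```

The failure mode from the text is visible in the model: a closure that never returns from its turn would stall `run` forever, since nothing preempts it.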
This book aims to explain how epoll, kqueue and IOCP work, and how we can use them for efficient, high-performance I/O. The book is divided into three parts: Part 1 - An express explanation: is probably what you want to read if you're interested in a short introduction. The Appendix contains some additional references and small articles explaining some concepts that I found interesting and which are related to the kind of code we write here. Part 2 is special. 99% of readers should not even go there. You'll find page after page of code and explanations just to implement the simplest example of a cross-platform event loop that actually works. Turns out that there is no "express" way of doing this. There are few times when I'm so excited about a new feature that I'll write about it before multiple PRs are merged in multiple repos. Typically one would wait and have the patience until everything is fully merged to master, yet I can't wait to talk about this one because it's just too damn cool. What this new branch offers is a way to instantly reboot cloud VMs whenever your application dies a horrible death. Let's say a bunch of bad packets from the wrong side of town arrive and decide to shoot your VM full of lead. In a typical Linux setup that instance is probably dead, Jim. Your load balancer might start re-routing around it. In a container setup you might get the same sort of deal. Sure, if it was just the process that died, systemd might be configured to restart on failure, but the whole box? However, what if you weren't running a full-blown Linux as your base VM? What if your base VM was only a single application that your VM booted straight into, and your application was re-spawned in seconds as if it was a process instead of a VM? Executables have been fascinating to me ever since I discovered, as a kid, that they were just files. If you renamed a .exe to something else, you could open it in notepad!
And if you renamed something else to a .exe, you'd get a neat error dialog. Clearly, something was different about these files. Seen from notepad, they were mostly gibberish, but there had to be order in that chaos. 12-year-old me knew that, although he didn't quite know how or where to dig to make sense of it all. So, this series is dedicated to my past self. In it we'll attempt to understand how Linux executables are organized, how they are executed, and how to make a program that takes an executable fresh off the linker and compresses it - just because we can. Since the last big series, Making our own ping, was all about Windows, this one will be focused on 64-bit Linux. OxidizedOS is a multicore, x86-64 kernel written in Rust. In this series, we will be discussing the implementation of kernel threads and a scheduler in Rust. Krabs is an experimental x86 bootloader written in Rust. Krabs can load and start an ELF-format Linux kernel compressed with bzip2. Some of the source code uses the libbzip2 C library for decompressing, but the rest is completely Rust only. This is chapter 6 of a multi-part series on writing a RISC-V OS in Rust. Processes are the whole point of the operating system. We want to start doing "stuff", which we'll fit into a process and get it going. We will update the process structure in the future as we add features to a process. For now, we need a program counter (which instruction is executing), and a stack for local memory. We will not create our standard library for processes. In this chapter, we're just going to write kernel functions and wrap them into a process. When we start creating our user processes, we will need to read from the block device and start executing instructions. That's quite a ways away, since we will need system calls and so forth. In the past few months I've been working with Red Sift on RedBPF, a BPF toolkit for Rust. Red Sift uses RedBPF to power the security monitoring agent InGRAINd.
Peter recently blogged about RedBPF and InGRAINd, and ran a workshop at RustFest Barcelona. We've continued to improve RedBPF since, fixing bugs, improving and adding new APIs, adding support for Google Kubernetes Engine kernels and more. We've also completed the relicensing of the project to Apache2/MIT – the licensing scheme used by many of the most prominent crates in the Rust ecosystem – which will hopefully make it even easier to adopt RedBPF. In this post I'm going to go into some detail on what RedBPF is, what its main components are, and what the full process of writing a BPF program looks like. As a follow-up to my post on distribution packaging, Fraser Tweedale (@hackuador) commented that, traditionally, the "security" aspects of distribution packaging were a compelling reason to use distribution packages over "upstreams". I want to dig into this further. This post gives an overview of the recent updates to the Writing an OS in Rust blog and the used libraries and tools. I moved to a new apartment mid-October and had lots of work to do there, so I didn't have the time to create the October status update post. Therefore, this post lists the changes from both October and November. I'm slowly picking up speed again, but I still have a lot of mails in my backlog. Sorry if you haven't received an answer yet! Microsoft can't throw away old Windows code, but the company's research under Project Verona is aiming to make Windows 10 more secure with its recent work on integrating Mozilla-developed Rust for low-level Windows components. A few years back, I wrote up a detailed blog post on Docker's process 1, orphans, zombies, and signal handling. The solution from three years ago was a Haskell executable providing this functionality and a Docker image based on Ubuntu. A few of the Haskellers on the FP Complete team have batted around the idea of rewriting pid1 in Rust as an educational exercise, and to have a nice comparison with Haskell.
No one got around to it. However, when Rust 1.39 came out with async/await support, I was looking for a good use case to demonstrate, and decided I'd do this with pid1. After the addition of the NVMe driver a couple months ago, I have been running Redox OS permanently (from an install to disk) on a System76 Galago Pro (galp3-c), with System76 Open Firmware as well as the un-announced, in-development, GPLv3 System76 EC firmware. This particular hardware has full support for the keyboard, touchpad, storage, and ethernet, making it easy to use with Redox. Moonrise is a Linux init system written in Lua with Rust support code. An init system is a software suite responsible for bringing the userspace components of an operating system online and, in most cases, managing long-running components such as background services. When I was writing a fingerd daemon in Rust (why? because I could), one thing that took me a little while to figure out was how to drop root privileges after I bound to port 79. Neotron is an attempt to make computers simple again, whilst also taking advantage of the very latest in programming language development. It is based around four simple concepts: the ARM Thumb-v7M instruction set, a standardised OS interface, a standardised BIOS interface, and use of the Rust programming language. Today I'm releasing a library called iou. This library provides idiomatic Rust bindings to the C library called liburing, which itself is a higher-level interface for interacting with the io_uring Linux kernel interface. Here are the answers to some questions I expect it may provoke. What is io_uring? io_uring is an interface added to the Linux kernel in version 5.1. Concurrent with that, the primary maintainer of that interface has also been publishing a library for interacting with it called liburing. This blog describes part of the story of Rust adoption at Microsoft.
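Dropping root after binding a low port, as the fingerd anecdote above mentions, hinges on ordering: change the group before the user, because once setuid succeeds the process is no longer allowed to call setgid. A sketch using raw libc declarations (so no external crate is needed); run as root, the uid/gid arguments would be those of an unprivileged account:

```rust
// Minimal extern declarations; these are the standard POSIX calls.
extern "C" {
    fn getuid() -> u32;
    fn getgid() -> u32;
    fn setgid(gid: u32) -> i32;
    fn setuid(uid: u32) -> i32;
}

/// Drop privileges to (uid, gid). Order matters: once setuid succeeds,
/// the process can no longer change its group.
fn drop_privileges(uid: u32, gid: u32) -> Result<(), &'static str> {
    unsafe {
        if setgid(gid) != 0 {
            return Err("setgid failed");
        }
        if setuid(uid) != 0 {
            return Err("setuid failed");
        }
    }
    Ok(())
}

fn main() {
    // Without root we can only "drop" to our own ids, but the call
    // sequence is exactly the one a daemon would use after bind().
    let (uid, gid) = unsafe { (getuid(), getgid()) };
    drop_privileges(uid, gid).expect("should succeed for our own ids");
    println!("now running as uid {} gid {}", uid, gid);
}
```

A production daemon would also call setgroups to clear supplementary groups before setgid; that step is omitted here for brevity.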
Recently, I've been tasked with an experimental rewrite of a low-level system component of the Windows codebase (sorry, we can't say which one yet). Instead of rewriting the code in C++, I was asked to use Rust, a memory-safe alternative. Though the project is not yet finished, I can say that my experience with Rust has been generally positive. It's a good choice for those looking to avoid common mistakes that often lead to security vulnerabilities in C++ code bases. During the product development process, monitoring our pipelines proved challenging, and we wanted more visibility into our containers. After a short period of exploration, we found that eBPF would address most of the pain points and dark spots we were encountering. There was one catch: no eBPF tooling would help us deploy and maintain new probes within our small but focused ops team. BCC, while great for tinkering, requires significant effort to roll out to production. It also makes it difficult to integrate our toolkit into our usual CI/CD deployment models. Faced with this dilemma, we decided the only option was for us to write our own Rust-based agent that integrated well with our testing and deployment strategies. I have come to the point with C++/WinRT where I am largely satisfied with how it works and leverages C++ to the best of its ability. There is always room for improvement and I will continue to evolve and optimize C++/WinRT as the C++ language itself advances. But as a technology, the Windows Runtime has always been about more than just one language and we have started working on a few different projects to add support for various languages. None of these efforts, however, could draw me away from C++… that is, until Rust showed up on my radar. Rust is an intriguing language for me.
It so closely resembles C++ in many ways, hitting all the right notes when it comes to compilation and runtime model, type system and deterministic finalization, that I could not help but get a little excited about this fresh new take on language design. And so it is that I have started building the WinRT language projection for Rust. Recently, a new Linux kernel interface, called io_uring, appeared. I have been looking into it a little bit and I can't help but wonder about it. Unfortunately, I've had only enough time to keep thinking and reading about it. Nevertheless, I've decided to share what I've been thinking about so far in case someone wants to write some actual code and experiment. Basically, I have an idea for a crate and I'd love someone else to write it 😇. In the world of systems programming, where one may find themselves writing hardware drivers or interacting directly with memory-mapped devices, that interaction is almost always through memory-mapped registers provided by the hardware. We typically interact with these things through bitwise operations on some fixed-width numeric type. QEMU and libvirt form the backend of the Red Hat userspace virtualization stack: they are used by our KVM-based products and by several applications included in Red Hat Enterprise Linux, such as virt-manager, libguestfs and GNOME Boxes. Play with Linux process termination, exploring such interesting features as PR_SET_CHILD_SUBREAPER and PR_SET_PDEATHSIG. RISC-V ("risk five") and the Rust programming language both start with an R, so naturally they fit together. In this blog, we will write an operating system targeting the RISC-V architecture in Rust (mostly). If you have a sane development environment for RISC-V, you can skip the setup parts right to bootloading. Otherwise, it'll be fairly difficult to get started.
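The register twiddling on fixed-width types mentioned above usually boils down to a volatile read-modify-write with masks. In the sketch below an ordinary u32 stands in for a memory-mapped register, and the field layout is invented for a hypothetical UART:

```rust
use std::ptr::{read_volatile, write_volatile};

// Invented layout for a hypothetical UART control register:
// bit 0 = enable, bits 8..12 = 4-bit clock divisor.
const ENABLE: u32 = 1 << 0;
const DIV_SHIFT: u32 = 8;
const DIV_MASK: u32 = 0b1111 << DIV_SHIFT;

/// Read-modify-write: set the divisor field without touching other bits.
/// In a real driver `reg` would point into a memory-mapped device region.
unsafe fn set_divisor(reg: *mut u32, div: u32) {
    let mut val = read_volatile(reg);
    val = (val & !DIV_MASK) | ((div << DIV_SHIFT) & DIV_MASK);
    write_volatile(reg, val);
}

fn main() {
    // A plain variable plays the role of the device register here.
    let mut fake_reg: u32 = ENABLE | (0b0011 << DIV_SHIFT);
    unsafe { set_divisor(&mut fake_reg, 0b1010) };
    assert_eq!(fake_reg & ENABLE, ENABLE); // enable bit untouched
    assert_eq!((fake_reg & DIV_MASK) >> DIV_SHIFT, 0b1010); // divisor updated
    println!("register = {:#010b}", fake_reg);
}
```

The volatile accesses matter on real hardware: they stop the compiler from caching or reordering reads and writes that have device-visible side effects.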
This tutorial will progressively build an operating system from start to something that you can show your friends or parents -- if they're significantly young enough. Since I'm rather new at this I decided to make it a "feature" that each blog post will mature as time goes on. More details will be added and some will be clarified. We designed a framework to help developers quickly build device drivers in Rust. We also utilized Rust's security features to provide several useful infrastructures for developers so that they can easily handle kernel memory allocation and concurrency management; at the same time, some common bugs (e.g. use-after-free) can be alleviated. We demonstrate the generality of our framework by implementing a real-world device driver on Raspberry Pi 3, and our evaluation shows that device drivers generated by our framework have acceptable binary size for canonical embedded systems and the runtime overhead is negligible. Over the past few months, System76 has been developing a simple, easy-to-use tool for updating firmware on Pop!_OS and System76 hardware. Today, we're excited to announce that you can now check and update firmware through Settings on Pop!_OS, and through the firmware manager GTK application on System76 hardware running other Debian-based distributions. In the last few weeks, I've been working on a new solution to firmware management on the Linux desktop: a generic framework that combines fwupd and system76-firmware, with a GTK frontend library and application, written in Rust. This week I've decided to skip trying to get GDB working for now (there are so many issues it'll take forever to solve them), and instead decided to finally give focus to the final concerns I had about ptrace.
Most changes this week were related to getting decent behavior of child processes, although the design feels… suboptimal, somehow (not sure why), so I feel I should be able to improve it later. Another change was security: tracers running as a non-root user can now, in addition to only tracing processes running as the same user, only trace processes that are direct or indirect children of the tracer. In the future this can easily be allowed with some kind of capability, but currently in Redox there isn't a capability-like system other than the simple (but really powerful) namespacing system, which sadly I don't think can be used for this. Once again, last week's action was merged, which means the full ptrace feature was merged, and it's time to start tackling the final issues which I have delayed for so long. But, before that, I decided to try to get some basic ptrace compatibility in relibc, so we could see just how far away software like gdb is from being ported, and what concerns I haven't thought about yet. redox-nix update: That said, I took a little break from the madness to instead lay my focus on another interesting problem: newer redoxer couldn't be compiled using carnix, because of some dependency that used a cargo feature carnix didn't support. Let me first explain what carnix is, and why this is a problem. Wrapping up the Ion as a library project. It is now possible to embed Ion in any Rust application. Ion takes any Read instance and can execute it (so yes, it is possible to run Ion without ever collecting the script's binary stream). It takes care of expanding the input and managing the running applications in an efficient manner, with a comprehensive set of errors. Ion is now the Rust-based, pipe-oriented liblua alternative. Before I dive into this week's actions, I am pleased to announce that all the last weeks' work is merged!
This merge means you can now experiment with basic ptrace functionality using only basic registers and PTRACE_SYSCALL/PTRACE_SINGLESTEP. I have already opened the second PR in the batch: Ptrace memory reading and floating point registers support, which will supply the "final bits" of the initial implementation, before all the nitpicking of final concerns can start (not to underestimate the importance and difficulty of these nitpicks - there are some areas of ptrace that aren't even thought about yet and those will need tending to)! I will comment on these changes in this blog post, as there are some interesting things going on! The next step in the journey of ptrace was to bite the bullet (or so I thought) and implement system-call tracing. Since the kernel must be able to handle system calls of processes, it's quite obvious that setting a breakpoint should involve the kernel: running in the context of the tracee, it should notify the tracer and wait. So the biggest challenge would be to figure out how kernel synchronization worked. This post adds support for heap allocation to our kernel. First, it gives an introduction to dynamic memory and shows how the borrow checker prevents common allocation errors. It then implements the basic allocation interface of Rust, creates a heap memory region, and sets up an allocator crate. At the end of this post all the allocation and collection types of the built-in alloc crate will be available to our kernel. With a pretty clear goal specified by the RFC, it was time to get things moving. I started with what I thought would be low-hanging fruit: reading the registers of another process.
It ended up being more difficult than I thought, but also really interesting, and I want to share it with you :) How to fetch battery information from the macOS APIs with Rust. I will quickly show how I got bindgen (https://rust-lang.github.io/rust-bindgen) to generate the bindings to Fuse (libfuse) with the current stable release of Rust. By doing so, this should demonstrate how to bootstrap writing your own Fuse file system in Rust. I do realise that there are some crates that already exist that aid in making Fuse drivers in Rust, but this was more or less an excuse to also try out bindgen, which I don't believe those existing libraries utilise. Manticore is a research operating system, written in Rust, with the aim of exploring the parakernel OS architecture. The OS is increasingly a bottleneck for server applications that want to take maximum advantage of the hardware. Many traditional kernel interfaces (such as in POSIX) were designed when I/O was significantly slower than the CPU. However, today I/O is getting faster, but single-threaded CPU performance has stagnated. For example, a 40 GbE NIC can receive a cache-line sized packet faster than the CPU can access its last-level cache (LLC), which makes it tricky for an OS to keep up with packets arriving from the network. Similarly, non-volatile memory (NVM) access speed is getting closer to DRAM speeds, which challenges OS abstractions for storage. To address this OS bottleneck, server applications are increasingly adopting kernel-bypass techniques. For example, the Seastar framework is an OS implemented in userspace, which implements its own CPU and I/O scheduler, and bypasses the Linux kernel as much as it can. Parakernel is an OS architecture that eliminates many OS abstractions (similar to exokernels) and partitions hardware resources (similar to multikernels) to facilitate high-performance server applications with increased application-level parallelism and predictable tail latency.
This repository contains a simple KVM firmware that is designed to be launched from anything that supports loading ELF binaries and running them with the Linux kernel loading standard. The ultimate goal is to use this "firmware" to load a bootloader from within a disk image. This post explores unit and integration testing in no_std executables. We will use Rust's support for custom test frameworks to execute test functions inside our kernel. To report the results out of QEMU, we will use different features of QEMU and the bootimage tool. Recently, the x86_64-unknown-uefi target was added into Rust mainline (https://github.com/rust-lang/rust/pull/56769), so I tried to write a UEFI application with this update. There exists an awesome crate, uefi-rs, which provides a Rust interface for UEFI applications. However, this is my first time writing a UEFI application, so to understand what happens in it, I didn't use any existing crate. It has been one year and four days since the last release of Redox OS! In this time, we have been hard at work improving the Redox ecosystem. Much of this work was related to relibc, a new C library written in Rust and maintained by the Redox OS project, and adding new packages to the cookbook. We are proud to report that we have now far exceeded the capabilities of newlib, which we were using as our system C library before. We have added many important libraries and programs, which you can see listed below. This post shows how to implement paging support in our kernel. It first explores different techniques to make the physical page table frames accessible to the kernel and discusses their respective advantages and drawbacks. It then implements an address translation function and a function to create a new mapping. This will be the first in a series of weekly updates on progress made in the development of Pop!_OS.
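The custom-test-frameworks feature mentioned above essentially hands your runner a slice of test functions collected by the compiler. The core of such a runner fits in a few lines; this sketch runs on the host rather than in QEMU, with the collection step done by hand:

```rust
/// What #[test_case] collection boils down to: the framework gathers
/// the annotated functions and passes them, as a slice, to a runner.
fn test_runner(tests: &[&dyn Fn()]) -> usize {
    println!("running {} tests", tests.len());
    for test in tests {
        test(); // a no_std kernel would print via serial and exit QEMU
    }
    tests.len()
}

fn trivial_assertion() {
    assert_eq!(1 + 1, 2);
}

fn vec_assertion() {
    let v: Vec<u32> = (0..3).collect();
    assert_eq!(v, [0, 1, 2]);
}

fn main() {
    let ran = test_runner(&[&trivial_assertion, &vec_assertion]);
    assert_eq!(ran, 2);
}
```

In the kernel version, the slice is built by the compiler from every `#[test_case]` item, and the runner reports results over the serial port before asking QEMU to exit.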
Thus, this will only contain content pertaining specifically to Pop!_OS, though at times there may be some overlap with the hardware side of System76.

I've decided to take a look at Minix, which is an interesting microkernel OS. Naturally, after building Minix from git, the first thing I decided to try was porting Rust's std to Minix so I could cross-compile Rust programs from Linux to run under Minix. Okay, I suppose I could have started with something else, but porting Rust software and modifying the platform-dependent part of std is something I have experience with from working on Redox OS. And Rust really isn't that hard to port.

We are going to make a demo Linux web server with systemd, a config file, and an installable .deb binary in Rust.

This post introduces paging, a very common memory management scheme that we will also use for our operating system. It explains why memory isolation is needed, how segmentation works, what virtual memory is, and how paging solves memory fragmentation issues. It also explores the layout of multilevel page tables on the x86_64 architecture.

It has been a long-standing tradition to develop a language far enough to be able to write the language's compiler in the same language, and Rust does the same. Rust is nowadays written in Rust. We've tracked down the earlier Rust versions, which were written in OCaml, and we were planning to use these to bootstrap Rust. But in parallel, John Hudge (Mutabah) developed a Rust compiler, called "mrustc", written in C++. mrustc is now good enough to compile Rust 1.19.0. Using mrustc, we were able to build Rust entirely from source with a bootstrap chain.

In this post we set up the programmable interrupt controller to correctly forward hardware interrupts to the CPU. To handle these interrupts we add new entries to our interrupt descriptor table, just like we did for our exception handlers. We will learn how to get periodic timer interrupts and how to get input from the keyboard.
Stratis 1.0 was quietly released last week, with the 1.0 version marking its initial stable release; the on-disk metadata format has also been stabilized. Red Hat engineers believe Stratis is now ready for more widespread testing.

Time for me to pack up and never ever contribute to Redox ever again… Just kidding. This isn't goodbye, you can't get rid of me that easily I'm afraid. I'll definitely want to contribute more, can't however say with certainty how much time I'll get, for school is approaching, quickly.

The previous blog post discusses how raw disk reads were implemented in the loader stub. The next step was to implement a clean read API which can be used by different filesystem libraries in order to read their respective filesystems. Since the raw reads from the BIOS interrupt had a granularity in terms of sectors (each sector being 512 bytes), the reads had to be translated in order to provide byte-level granularity. The clone_from_slice function ensures that a direct call to memcpy is not required. The refined read function is here.

At the time of writing the previous blog, the plan was to target the Raspberry Pi 3 (Cortex-A53) as a development platform because of its availability, popularity and community. Sadly, it seems that Broadcom went through a lot of shortcuts while implementing this specific design, which means features like the GIC are half-there or completely missing, as in this case. After a discussion with @microcolonel, he proposed and kindly sent me a HiKey960 reference SoC from the awesome Linaro 96Boards initiative. The quality of this board is definitely a lot better than the Raspberry Pi and the documentation is detailed and open. Great stuff.

With the recent addition of Rust 1.27.0 in the HaikuPorts repository, I thought it would be good to do a short, public write-up of the current state of Rust on Haiku, and some insight into the future.

This is the second blog post about implementing a FAT32 filesystem in Redox.
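The sector-to-byte read translation described above can be sketched as follows. This is a hypothetical illustration, not the actual Redox loader code: the `SectorRead` trait, the `Mem` toy device and all names are assumptions made for the example.

```rust
// Sketch: serving byte-granularity reads on top of a device that can
// only read whole 512-byte sectors. Names are illustrative, not the
// actual Redox loader API.
const SECTOR_SIZE: usize = 512;

trait SectorRead {
    // Read one whole sector at the given logical block address.
    fn read_sector(&self, lba: u64, buf: &mut [u8; SECTOR_SIZE]);
}

// A toy in-memory "disk", standing in for the BIOS interrupt reads.
struct Mem(Vec<u8>);

impl SectorRead for Mem {
    fn read_sector(&self, lba: u64, buf: &mut [u8; SECTOR_SIZE]) {
        let base = lba as usize * SECTOR_SIZE;
        buf.copy_from_slice(&self.0[base..base + SECTOR_SIZE]);
    }
}

/// Fill `out` with bytes starting at an arbitrary byte `offset`,
/// stitching together as many whole-sector reads as needed.
fn read_bytes<D: SectorRead>(dev: &D, offset: u64, out: &mut [u8]) {
    let mut sector = [0u8; SECTOR_SIZE];
    let mut done = 0;
    while done < out.len() {
        let pos = offset + done as u64;
        let lba = pos / SECTOR_SIZE as u64;
        let start = (pos % SECTOR_SIZE as u64) as usize;
        dev.read_sector(lba, &mut sector);
        let n = (out.len() - done).min(SECTOR_SIZE - start);
        // clone_from_slice copies the span without an explicit memcpy call.
        out[done..done + n].clone_from_slice(&sector[start..start + n]);
        done += n;
    }
}
```

A filesystem library can then ask for, say, 100 bytes at byte offset 480 and the loop transparently issues the two sector reads that span the request.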
As promised in the previous article (thanks for all the valuable feedback ‒ I didn't have the time to act on it yet, but I will), this talks about Unix signal handling. Long story short, I wasn't happy about the signal handling story in Rust and this is my attempt at improving it.

Over the last couple of weeks, Nebulet has progressed significantly. Because of that, I think it's time to talk about why I made certain decisions when designing and writing Nebulet.

All excited. A first calendar entry to describe my attempt at ARM64 support in Redox OS. Specifically, looking into the Raspberry Pi 2/3B/3+ (all of them having a Cortex-A53 ARMv8 64-bit microprocessor), although for all my experiments I am going to use the Raspberry Pi 3B.

In this post we explore double faults in detail. We also set up an Interrupt Stack Table to catch double faults on a separate kernel stack. This way, we can completely prevent triple faults, even on kernel stack overflow.

In this post, we start exploring CPU exceptions. Exceptions occur in various erroneous situations, for example when accessing an invalid memory address or when dividing by zero. To catch them, we have to set up an interrupt descriptor table that provides handler functions. At the end of this post, our kernel will be able to catch breakpoint exceptions and to resume normal execution afterwards.

In this post we complete the testing picture by implementing a basic integration test framework, which allows us to run tests on the target system. The idea is to run tests inside QEMU and report the results back to the host through the serial port.

Last week I ended off stating that the redox netstack might soon switch to an edge-triggered model. Well, I ended up feeling bad about the idea of letting others do my work and decided to stop being lazy and just do it myself.

Rust is an extremely interesting language for the development of system software.
This was the motivation to evaluate Rust for HermitCore and to develop an experimental version of our libOS in Rust. Components like the IP stack and uhyve (our unikernel hypervisor) are still written in C. In addition, the user applications are still compiled by our cross-compiler, which is based on gcc and supports C, C++, Fortran, and Go. The core of the kernel, however, is now written in Rust and published at GitHub. Our experiences so far are really good and we are looking into possible new Rust activities, e.g., support for Rust's userland.

This is a blog post about the work which I have done so far in implementing a FAT32 filesystem in Redox. Currently the Redox bootloader as well as the userspace filesystem daemon support only RedoxFS.

This is the weekly summary for my Redox Summer of Code project: porting tokio to Redox. Most of the time was spent on one bug, and after that one was figured out and fixed it ended up being relatively easy! As of now, 11 of 13 tokio examples seem to work on Redox. The remaining examples are UDP and seem to fail because of something either with the Rust standard library or my setup.

This post explores unit testing in no_std executables using Rust's built-in test framework. We will adjust our code so that cargo test works and add some basic unit tests to our VGA buffer module.

Redox OS is running its own Summer of Code this year, after the Microkernel devroom did not get accepted into GSoC 2018. We are looking for both students and sponsors who want to help Redox OS grow. At the moment, Redox OS has $10,800 in donations from various platforms to use to fund students.
This will give us three students working for three months, if each student requests $1200 per month on average as described in Payment. In order to fund more students, we are looking for sponsors who are willing to fund RSoC. Donations can be made on the Donate page. All donations will be used to fund Redox OS activities, with about 90% of those over the past year currently allocated to RSoC.

Our second iteration of the 18.04 ISO is ready for testing. Testing the new installer and Optimus switching is our priority for this test release. Please test installing on a variety of hardware and provide feedback on any issues you encounter. If you run into any bugs, you can file them at https://github.com/pop-os/pop/issues.

Installing a toolchain for Rust is very easy, as support for CloudABI has been upstreamed into the Rust codebase. Automated builds are performed by the Rust developers. As there hasn't been a stable release of Rust to include CloudABI support yet, you must for now make use of Rust's nightly track.

Over the past six months we've been working on a second edition of this blog. Our goals for this new version are numerous and we are still not done yet, but today we reached a major milestone: it is now possible to build the OS natively on Windows, macOS, and Linux without any non-Rust dependencies.

Writing eBPF tracing tools in Rust

I have been playing with eBPF (extended Berkeley Packet Filters), a neat feature present in recent Linux versions (it evolved from the much older BPF filters). It is a virtual machine running in th…
When Nalini Nadkarni was a young scientist in the 1980s, she wanted to study the canopy – the part of the trees just above the forest floor to the very top branches. But back then, people hadn't figured out a good way to easily reach the canopy so it was difficult to conduct research in the tree tops. And Nadkarni's graduate school advisors didn't really think studying the canopy was worthwhile. "That's just Tarzan and Jane stuff. You know that's just glamour stuff," Nadkarni remembers advisors telling her. "There's no science up there that you need to do."

They couldn't have been more wrong. Over the course of her career, Nadkarni's work has illuminated the unique and complex world of the forest canopy. She helped shape our understanding of canopy soils — a type of soil that forms on the tree trunks and branches. The soil is made up of dead canopy plants and animals that decompose in place. The rich soil supports canopy-dwelling plants, insects and microorganisms that live their entire life cycles in the treetops. If the canopy soil falls to the forest floor, the soil joins the nutrient cycles of the whole forest. She also discovered that some trees are able to grow above-ground roots from their branches and trunks. Much like below-ground roots, the aerial roots can transport water and nutrients into the tree.

During Nadkarni's early work as an ecologist she began to realize something else: There weren't many women conducting canopy research. Nadkarni was determined to change this. In the early 2000s, she and her lab colleagues came up with the idea of TreeTop Barbie, a canopy researcher version of the popular Barbie doll that could be marketed to young girls. She pitched the idea to Mattel, the company that makes Barbie. "When I proposed this idea they said, 'We're not interested. That has no meaning to us,'" says Nadkarni. "'We make our own Barbies.'" Nadkarni decided to make them herself anyway.
She thrifted old Barbies; commissioned a tailor to make the clothes for TreeTop Barbie; and she created a TreeTop Barbie field guide to canopy plants. Nadkarni sold the dolls at cost and brought TreeTop Barbie to conferences and lectures. Her efforts landed her in the pages of The New York Times, and word eventually got back to Mattel. The owners of Barbie wanted her to shut down TreeTop Barbie due to brand infringement. Nadkarni pushed back. "Well you know, I know a number of journalists who would be really interested in knowing that Mattel is trying to shut down a small, brown woman who's trying to inspire young girls to go into science," she recalls telling Mattel. Mattel relented. The company allowed her to continue her small-scale operation. By Nadkarni's count, she sold about 400 dolls over the years. Then in 2018, more than a decade after Nadkarni started TreeTop Barbie, she got an unbelievable phone call. National Geographic had partnered with Mattel to make a series of Barbies focused on exploration and science. And they wanted Nadkarni to be an advisor. "I thought, this is incredible. This is like full circle coming around. This is a dream come true," says Nadkarni. For its part, Mattel is "thrilled to partner with National Geographic and Nalini," a spokesperson told NPR. Nadkarni knows that everyone might not approve of her working with Barbie. Barbie's role in creating an unrealistic standard of beauty for young women has been debated. Nadkarni has also wrestled with how she feels about it. "My sense is yes she's a plastic doll. Yes she's configured in all the ways that we should not be thinking of how women should be shaped," says Nadkarni. "But the fact that now there are these explorer Barbies that are being role models for little girls so that they can literally see themselves as a nature photographer, or an astrophysicist, or an entomologist or you know a tree climber... It's never perfect. But I think it's a step forward." 
Nadkarni is an Emeritus Professor at The Evergreen State College, and currently is a professor in the School of Biological Sciences at the University of Utah. Nalini Nadkarni's story has appeared in The Washington Post, Time Magazine, Taiwan News, News India Times, Philadelphia Inquirer, National Geographic, The Guardian, Science Friday, San Francisco Chronicle, India Today, India Times, KSL News, Salt Lake Tribune, USA Today, BBC, The Morning Journal, CNN, UNEWS, Star Tribune, National Science Foundation, Continuum, TreeHugger, and many others.
Bananas are the fourth most important food crop after rice, wheat and corn, and eaten around the world. Given their ample nutrient and fiber content, as well as low cost and wide availability, the popularity of bananas is not surprising. Eating fiber-rich bananas can be beneficial to your health, as studies have shown that those who consume more fiber have reduced risk of chronic disease.

How Much Fiber?

All bananas contain fiber, and the amount varies with the banana's size. A small banana of 6 to 7 inches in length contains 2.6 grams of fiber; a medium banana contains 3.1 grams of fiber; and an extra large banana -- over 9 inches long -- contains 4 grams of fiber. While the average daily fiber intake in the United States is a mere 15 grams per day, dietary guidelines recommend that women consume 25 grams of fiber per day and men consume 38 grams.

Types of Fiber

Fiber is a carbohydrate that your body is incapable of digesting. Fiber comes in two varieties, soluble and insoluble, and both are favorable for your health. Soluble fiber dissolves in water and slows digestion, helping to lower glucose levels and cholesterol. Insoluble fiber doesn't dissolve in water and helps move food through your digestive system. A medium banana that contains 3.1 grams of fiber is made up of 1 gram of soluble fiber and 2.1 grams of insoluble fiber.

Why Fiber is Good

It's best to get your fiber from plant sources directly, as few fiber supplements have been examined for effectiveness. Fiber helps your body regulate its use of sugar, keeping your blood sugar in check. Fiber also has a favorable effect on the risk factors for numerous chronic diseases, including heart disease and diabetes. In addition to helping reduce the risk of chronic disease, high-fiber diets are more satiating and affiliated with lower body weight.

Banana's Other Benefits

The nutritional benefits of bananas extend far beyond fiber content.
One cup of sliced banana contains more than 500 milligrams of potassium, which helps to counterbalance some of sodium's harmful effects and reduce your blood pressure. Given their sweetness, bananas make an excellent sweet snack or dessert. Use them to make a healthy smoothie or slice them atop breakfast oatmeal or cereal to naturally sweeten it up.
People - Ancient Greece: Polyperchon (394–303 BC)

Ancient Macedonian general who served under Philip II and Alexander the Great.

Polyperchon in Wikipedia

Polyperchon (Greek: Πολυπέρχων) (394–303 BC), son of Simmias from Tymphaia in Epirus, was a Macedonian general who served under Philip II and Alexander the Great, accompanying Alexander throughout his long journeys. After the return to Babylon, Polyperchon was sent back to Macedon with Craterus, but had only reached Cilicia by the time of Alexander's death in 323 BC. Polyperchon and Craterus continued on to Greece, helping Antipater to defeat the Greek rebellion in the Lamian War. Polyperchon remained in Macedon and, following the First War of the Diadochi, remained home as regent of Macedon while Antipater travelled to Asia Minor to assert his regency over the whole Empire. Upon Antipater's death in 319, Polyperchon was appointed regent and supreme commander of the entire empire but soon fell into conflict with Antipater's son Cassander, who was to have been his chief lieutenant. The two fell into civil war, which quickly spread among all the successors of Alexander, with Polyperchon allying with Eumenes against Cassander, Antigonus and Ptolemy. Although Polyperchon was initially successful in securing control of the Greek cities, whose freedom he proclaimed, his fleet was destroyed by Antigonus in 318 BC, and Cassander secured control of Athens the next year. Shortly thereafter, Polyperchon was driven from Macedon by Cassander, who took control of the weakling king Philip Arrhidaeus and his wife Eurydice. Polyperchon fled to Epirus, where he joined Alexander's mother Olympias, widow Roxana, and infant son Alexander IV. He formed an alliance with Olympias and King Aeacides of Epirus, and Olympias led an army into Macedon.
She was initially successful, defeating and capturing the army of King Philip, whom she had murdered, but soon Cassander returned from the Peloponnesus and captured and murdered her in 316, taking Roxana and the boy king into his custody. Polyperchon now fled to the Peloponnesus, where he still controlled a few strongpoints, and allied himself with Antigonus, who had by now fallen out with his former allies. Polyperchon soon controlled much of the Peloponnesus, including Corinth and Sicyon. Following the peace treaty of 311 between Antigonus and his enemies, and the murder of the boy-king Alexander and his mother, Polyperchon retained these areas, and when war again broke out between Antigonus and the others, he sent Alexander's natural son Heracles to Polyperchon as a bargaining chip to use against Cassander. Polyperchon, however, decided to break with Antigonus and murdered the boy in 309. He retained control of the Peloponnesus until his death a few years later, but played no further role in politics. He had a son named Alexander who was a noted general in the Wars of the Diadochi.

Polysperchon in Harpers Dictionary of Classical Antiquities (1898)

(Πολυσπέρχων). A Macedonian, and a distinguished officer of Alexander the Great (Arrian, Anab. iii. 11). In B.C. 323 he was appointed by Alexander II. in command of the army of invalids and veterans, which Craterus had to conduct home to Macedonia. He afterwards served under Antipater in Europe, and so great was the confidence which the latter reposed in him, that Antipater on his death-bed (319 B.C.) appointed Polysperchon to succeed him as regent and guardian of the king, while he assigned to his own son Cassander the subordinate station of Chiliarch. Polysperchon soon became involved in war with Cassander, who was dissatisfied with this arrangement. It was in the course of this war that Polysperchon basely surrendered Phocion to the Athenians, in the hope of securing the adherence of Athens. (See Phocion.)
Although Polysperchon was supported by Olympias, and possessed great influence with the Macedonian soldiers, he proved no match for Cassander, and was obliged to yield to him the possession of Macedonia about 316. For the next few years Polysperchon is rarely mentioned, but in 310, he again assumed an important part by reviving the long-forgotten pretensions of Heracles, the son of Alexander and Barsiné, to the throne of Macedonia. Cassander marched against him, but distrusting the fidelity of his own troops, he entered into secret negotiations with Polysperchon, and persuaded the latter, by promises and flatteries, to murder Heracles (Diod. xx. 28). From this time he appears to have served under Cassander; but the period of his death is not mentioned.
Aside from being incredibly nutritious, oats may also help improve the digestive system and lower blood sugar.

Oats are some of the healthiest and most versatile foods out there. Besides being the base for oatmeal, they can also be served in a variety of ways such as breakfast bars and smoothies. Here's why you might want to consider eating more of this cereal grain regularly:

Oats significantly benefit the digestive system

Oats are some of the most nutrient-dense foods in your kitchen. For example, half a cup of dry oats (78 grams) is already composed of 51 grams of carbs, 13 grams of protein, five grams of fat, and eight grams of fiber. That they are a great source of fiber (including the fiber beta-glucan, which is strongly linked to improving cholesterol levels) and carbohydrates is enough reason to add more of them to your diet.

They keep you full for longer

The reason why oatmeal is often eaten during breakfast is that it's filling and contains vitamins that give you all the energy you need for the day. The purest form of oats is also gluten-free, and because oats are whole grain, they can keep you full throughout the day. Beta-glucan, a type of sugar found in oats, affects the release of peptide hormones (produced in the gut in response to eating), which helps you consume fewer calories.

They keep your heart healthy

Eating oatmeal regularly is proven to lower high cholesterol levels (hyperlipidemia), one major cause of heart disease. Studies have shown that beta-glucan may increase the excretion of cholesterol-rich bile, which in turn reduces the cholesterol circulating in the blood. As such, oats help avoid inflammation in the arteries and may lower the risk of heart attacks and strokes. If consumed as part of a meal low in saturated fat, three grams of soluble fiber from oats may also help prevent high cholesterol and heart disease.
Velázquez, Diego
- (1599–1660), Spanish painter, court painter to Philip IV; full name Diego Rodríguez de Silva y Velázquez. His portraits humanized the formal Spanish tradition of idealized figures. Notable works: Pope Innocent X (1650), The Toilet of Venus (known as The Rokeby Venus, circa 1651), and Las Meninas (circa 1656).
Comparative Analysis in Retail Marketing

Choose two retailers in the same product sector and compare and contrast their approach to retail marketing. You should choose two retailers that have a different market position and target consumer. Your report should address the following questions:

• Briefly summarise the value proposition that each of the two retailers is offering to their specific target consumers.
• Critically compare and contrast how the two retailers design their stores – in terms of layout, atmospherics and merchandising – in order to engage with consumers. Provide diagrams and photographic images to support points made.
• Critically compare and contrast how these retailers use TWO of the following additional marketing elements to appeal to and convert target consumers – location, pricing, added value services, and/or marketing communications.

Briefly summarise the value proposition that each of the two retailers is offering to their specific target consumers. What combination of features makes the retailer attractive to customers?

Critically compare and contrast how the two retailers design their stores – in terms of layout, atmospherics and merchandising – in order to engage with consumers.
• Refer to the SOR Model to help structure your review
• Provide diagrams and photographic images to support points made.

Critically compare and contrast how these retailers use TWO of the following additional marketing elements to appeal to and convert target consumers:
• Added Value Services
• Marketing Communications

Ghana was the first nation in black Africa to achieve independence, on March 6, 1957. It showed the way for the rest of Africa to free itself from the colonization which was spread everywhere on the continent. Kwame Nkrumah was the one who, inspired by India's independence, came up with the Convention People's Party (CPP) and brought imperial Britain to leave the Gold Coast (Ghana's name before independence) by political means alone.
After this victory, Kwame Nkrumah became the symbol of an age as well as of the anti-colonialist struggle, recognized by most as the first true African activist victorious over colonialism. The independence of the Gold Coast had an impact all over the continent and was at the basis of the many nationalist movements realized afterwards.

After the independence of Ghana: No one doubted the bright economic future of Ghana, as it was the first cocoa exporter worldwide and produced large quantities of gold (around one tenth of the world's production). Ghana was also full of crops, forests and even gemstones. Finally, many in Ghana were educated and a quarter of Ghana's population was literate. In addition, Nkrumah was becoming more and more appreciated by the people because of the inspiration he gave to them. He took on the heavy responsibility of rebuilding this country and of uniting its inhabitants, even though they shared very few things in common and colonization had ended only recently. Indeed, in this year, many groups still remained hostile toward each other after the decades of wars and of the slave trade. Tensions were still present as Ghana was trying to change its face. The country was not stable yet and the population could have been influenced by others; hence Nkrumah decided that all political parties, whether regionally or tribally oriented, were forbidden in order to prevent any internal problems caused by feelings of nationalism.

1958 was a dark year for Ghana, which was no longer the world's biggest cocoa supplier. Unfortunately, the country was facing an economic downturn which created a social crisis. Nkrumah's government lost its popularity with the masses and the rural population.
The government's response made things worse: indeed, Nkrumah became oppressive and took many hard measures against the demonstrations and against anyone who resented his government. Yet he had once said: "If we get self-government, we will transform the Gold Coast (Ghana) into a paradise in 10 years."

Strikes were considered illegal and severely punished. He implemented a law which allowed the arrest, without trial, of anyone suspected of being against the state, for years, which later turned into ten years. All political parties were banned. Thus Nkrumah proclaimed himself president for life, Ghana became a one-party state, and things finally turned bad for Ghana's workers.

In 1960, Nkrumah was designated president of the republic. The president had high expectations for Ghana and started many expensive and ambitious projects, sadly without getting any benefit from them. In reality, Nkrumah wanted to use the resources of Ghana to promote industrial development and economic growth for the country. Ghana had a considerable amount of bauxite, and that could ensure a good rise of the sector, particularly through the manufacture of aluminum exported around the world. Yet to start these projects, the need for electricity became a priority. As a consequence, the process of industrialization began, leading to the Volta Dam project. The project was only half successful, as were many others Nkrumah had run, though no one could doubt the good intentions behind them. The agricultural sector remained neglected, even though it represents the basis of a developing country, and especially of Ghana, which disposes of a lot of natural resources. As a consequence, the economy started to turn bad and Ghana contracted a debt which kept increasing.
The positive mood of the recent past years, which had tended to remain confident about Ghana's development, ended and prompted a big change in the political climate. Later on, in 1962, the economic situation evolved so badly that all foreign investors and industries were obliged by law to reinvest more than 60 percent of their gains within Ghana. The president had no choice but to force his investors, as he did with the population, to keep giving money to an infrastructure that nobody else believed in any more.

As if the fall were endless, in 1964, one year after W.E.B. Du Bois died (he was the first African American to graduate at Harvard and to obtain a doctorate; he was also known as an activist against racism and segregation, and after the independence of Ghana he was invited by Nkrumah to live there), the president Nkrumah suspended the constitution and therefore democracy. Ghana was finally officially recognized as a one-party state ruled by a dictator. Once again, the West reacted after realizing into what situation Ghana had fallen after independence. Criticized by Western societies, Nkrumah began to work with communist countries, chiefly the Soviet Union. At that time, Ghana's economic crisis had reached its climax: the country was out of control and the people kept getting poorer. The dictator was totally unpopular because of his past actions against his people; Nkrumah was no longer a popular leader, as he hit hard on demonstrations and arrested anyone in opposition.

The first coup

On the 24th of February, 1966, a military takeover took place in Ghana. It did not cause any great losses, as it was planned to happen while Nkrumah was out of the country visiting his friend President Sékou Touré in Guinea.
The military coup was carried out by British-trained officers whose ambition was to end the harsh rule of Nkrumah and his administration. Thus, while the president was away, all of his statues in Accra were pulled down by the people. The new military government called itself the National Liberation Council (NLC). It declared that its intention was to root out corruption and to make changes to the constitution so that Ghana could return to a democratic system. Unofficially, Britain was intervening in Ghana because of the orientation the country had taken towards the communist bloc during the last years of Nkrumah's dictatorship. Indeed, this was the Cold War: the world was divided in two, and attempts to pull countries to one side or the other were not rare, especially among underdeveloped countries. As a consequence, the NLC's council tended to be conservative rather than socialist, and it therefore kept all politicians and ideologues under strict control, whether they were socialists or communists. All relations with the Soviet Union were broken off, and experts from the USSR and China were expelled in order to remove any influence that could lead Ghana to communism. Ghana was having its chance once again: in the eyes of the West, Ghana was taking another path, a proper one towards democracy and self-sufficiency. After three years of provisional rule, the NLC once again legalized the participation of multiple political parties. Finally, new elections were announced for September 1969, which marked the beginning of the Second Republic. A new civilian government was formed by Dr. Kofi Busia and the Progress Party. His party got off to a good start, as the national economy regained strength thanks to high prices on the cocoa market.
Quickly, prices dropped again, and Ghana's economic situation went from bad to worse in 1971. Indeed, a political decision had been made to devalue the cedi, which led to higher prices and to demonstrations and violent clashes among the population. In 1972, Kwame Nkrumah died; despite his political failure, the African masses still saw in him a brave activist, a symbol of the anti-colonialist struggle, and the founder of modern Ghana. On the 13th of January 1972, another coup took place, carried out by forces of the army, for a change of government. This time, the National Redemption Council decided to impose a leader on Ghana, choosing Colonel Ignatius Acheampong to run the state. However, the new head of state did not have enough experience in any domain, whether political or economic. Acheampong's lack of vision led to a rise of corruption from the bottom to the top of society and government. As a result, large strikes were organized by the country's youth to voice their disappointment at the critical situation in which the country was being led. A year later, the economy was nearly falling to pieces and no agreement could be reached with the NRC government. Acheampong took the initiative to put an end to that administration and installed the Supreme Military Council (SMC), made up of a small group of seven people chosen by himself. The SMC ruled the country harshly: anyone opposing the government fell victim to repeated persecutions and even to imprisonment without sentence. On the 5th of July 1978, Acheampong was forced to resign, while General William Akuffo took charge of the "Supreme Military Council II". Akuffo committed himself to restoring a civilian government and to recognizing political parties in Ghana once again.
Finally, he declared that he would set a date for new elections. Later, on the 4th of June 1979, after a first failed coup earlier that same year, Jerry John Rawlings, a flight lieutenant, staged a takeover a few days before the planned election. He was finally victorious, and the Armed Forces Revolutionary Council managed to take control. His ideas, whether political or economic, were essentially inspired by socialism. His goal was to find a solution to co
This Thursday marks the 48th anniversary of Malcolm X's assassination at the age of 39. In most parts of the country the date will be observed with little fanfare. Today Malcolm X is too often relegated to little more than a footnote in American history, despite the recent controversy and attention generated by Columbia University professor Manning Marable's Pulitzer Prize-winning biography, "Malcolm X: A Life of Reinvention." It's time to restore Malcolm X's proper place in history. He remains, in my view, the greatest American leader of any color in the past century. His unflinching courage and conviction, open and evolving mind, and journey from street hustler and prisoner to international voice for the oppressed give him an unparalleled place in U.S. history. Too often we equate leadership with elected office. Malcolm X was not "political" in the corrupting, compromising and ultimately cowardly way we've come to define the word. He sought justice "by any means necessary," and he wasn't interested in turning the other cheek. Even critics of his fiery rhetoric, however, acknowledged that the fear it created made it easier for others who professed nonviolence, including Rev. Martin Luther King Jr., to push the nation forward. A spellbinding orator, Malcolm X took complex ideas and made them plain. During the tumultuous 1960s, his call for self-determination resounded in the nation's burning ghettos. Toward the end of his life, he carried the struggles of African-Americans into the global arena, redefining the movement as one of human -- not just civil -- rights. He changed how the world viewed African-Americans and how African-Americans viewed themselves. Malcolm X wanted black people to love themselves as much as he loved them. He was a rare cat -- someone who could 'fess up to mistakes, change his mind when presented with new information and not dodge his blemishes and blunders. 
After a pilgrimage to Mecca in 1964, he altered his views on whites and race relations, affirming that we are all part of the human family. But he remained a black nationalist, understanding that African-Americans must control their institutions and economies to gain real equality in America. The nation has changed enormously since Malcolm X was assassinated by Nation of Islam gunmen in Manhattan's Audubon Ballroom on Feb. 21, 1965. Even so, America's progress remains uneven, its promise of justice unfulfilled. Concentrated poverty and gun violence are more insidious today than they were a half-century ago. America's first African-American president presides over a country with nearly 1 million black men locked up in its jails and prisons. The nation needs a new urban agenda that traditional civil rights groups must get squarely behind. It's a movement Malcolm X would have helped define. He spoke for the poor and disenfranchised in America's central cities. A former prisoner who cleaned himself up, Malcolm X spoke with authority about the criminal justice system. As someone who regularly speaks to prisoners, I know he remains an icon and inspiration behind the walls. Malcolm X never sought national honors while he lived, nor would he expect them after his death. Still, acknowledging his life in a more concrete and enduring way could encourage others to carry on his work. That would be the best way to honor this fearless and uncompromising freedom fighter. Nearly 50 years after his death, Malcolm X continues to remind us not of how far we've come, but of how far we have yet to go.

Jeff Gerritt is deputy editorial page editor of The Blade, the Post-Gazette's sister newspaper in Toledo, Ohio ([email protected], 419-724-6467). Follow him on twitter @jeffgerritt.
People who keep their teeth and gums healthy with regular brushing may have a lower risk of developing dementia. Researchers followed close to 5,500 elderly people over an 18-year period. Women who reported brushing their teeth less than once a day were up to 65 percent more likely to develop dementia. Inflammation from gum disease-related bacteria is already linked to heart disease, stroke and diabetes. Gum disease bacteria might also get into the brain, causing inflammation and brain damage, researchers told Reuters Health. Participants ranged in age from 52 to 105, with an average age of 81. All were free of dementia at the outset, when they answered questions about their dental health habits, the condition of their teeth and whether they wore dentures. Researchers followed up 18 years later, using interviews, medical records and in some cases death certificates to determine that 1,145 of the original group had been diagnosed with dementia. Men were less affected. The less frequent brushers were 22 percent more likely to have dementia than those who did brush daily. Statistically, however, the effect was so small it could have been due to chance, the researchers said. There was a significant difference between men who had all, or at least most, of their teeth, or who wore dentures, and those who didn't - the latter group were almost twice as likely to develop dementia. That effect was not seen in women. So, brush your teeth, floss, gargle, use one of those tongue doohickeys and see your dentist regularly.
Colorado Forests Need You Now

If you live in or have been to Colorado you know that wild forests make the state magical. Right now, many of those forest lands are at risk. The federal 2001 Roadless Area Conservation Rule protects nearly 60 million acres of pristine forests throughout the nation. Yet the state of Colorado is moving forward with a proposal that would exempt the state from the national rule, replacing it with a weaker version that could damage some of the most beautiful countryside in Colorado. Please take action today by contacting President Obama and asking him to direct Agriculture Secretary Tom Vilsack (who oversees the US Forest Service) to suspend Colorado's efforts. Currant Creek, high above the North Fork of the Gunnison River, is one of the places that could be ruined. This distinctly remote and unaltered area spans a diverse mid-elevation forest landscape hosting aspen, oak and serviceberry. It provides essential habitat for elk calving, mule deer rearing, migration and other seasonal wildlife needs. Under the rule Colorado is proposing, Currant Creek would be opened to coal mining and a network of new roads - all far from any existing coal portals and transportation networks. You can read more here about Currant Creek and what is happening in Colorado. Please help Colorado's forests (and avoid setting a bad precedent for other states) by sending your note today and letting friends know.

I'm writing you today to ask that you direct the U.S. Secretary of Agriculture to immediately suspend the state of Colorado's effort to approve a state-based roadless rule that would open up national forests in Colorado to road building and other destructive activities. Instead, Colorado's forests should be afforded the same protection as national forests across the country under the national 2001 rule. You have already expressed great support for protecting roadless forests and we ask that you keep fighting for this worthy cause.
Roadless forests play a critical role in the health of our planet and our communities. They protect sources of drinking water, serve as home to limitless recreational opportunities, provide habitat for wildlife, and help defend us against the impact of global warming. Please direct the U.S. Secretary of Agriculture to suspend the state's rulemaking efforts and instead support the national rule, protecting Colorado's national forests to the standard they deserve. P.S. Please also eliminate the Bush-era exemption to roadless protection for the Tongass National Forest in Alaska. As you know, America's largest national rain forest is indispensable to salmon fishermen, native cultures, and local economies. The Wilderness Society started this petition with a single signature, and now has 1,239 supporters.
CCR has been fighting for racial justice since our first day. We organized legal support for and defended marchers who were arrested on the historic Selma-to-Montgomery march in 1965; litigated scores of Voting Rights Act cases; led challenges to de facto segregation that held states responsible for affirmative duties to racial equality; and established a national Anti-Ku Klux Klan Network in the late 1970s. In recent decades, our Telephone Justice Campaign challenged the exploitive phone rates New York State prisoners had to pay, and we supported public school teachers of color. Now, from taking on the FDNY's discriminatory hiring and the NYPD's stop-and-frisk practices to providing legal support to the Black Lives Matter movement, CCR continues the unfinished work of the Civil Rights Movement. We use the landmark legislation passed during the 1960s to challenge both intentional discrimination and the discriminatory impacts of government practices and policies, and we work closely with grassroots organizations that are driving the demand for reform. Racial injustice is deeply intertwined with many other injustices the Center is fighting – from abusive immigration practices to Muslim profiling to mass incarceration – and we explicitly make connections among different struggles. Above all, CCR is committed to addressing the structural and systemic nature of racism in our society.
Welcome to Bugbrooke Village The village, named in the Domesday Book of 1086 AD as "Buchebroc", is situated on the Hoarestone Brook, which flows through the village from south to north. The name of the stream is supposed to be a corruption of Horse-stone, as an old packhorse route crossed the brook by a simple slab bridge just outside the village. When the stream was widened in the 1970s, the last of the mediaeval slabs was damaged beyond repair, but the pillars are still intact. The brook meets the River Nene near Bugbrooke Mill. The first mill on the site was established in 800 AD and by the time of the Domesday Book was the third-highest rated mill in England. It is now the site of Heygate's flour mill, whose large central tower can be seen for several miles around. Heygate's trucks, with their distinctive maroon markings, can frequently be seen rumbling along Bugbrooke's main road.
- To understand the Software Project Planning and Evaluation techniques.
- To plan and manage projects at each stage of the software development life cycle (SDLC).
- To learn about the activity planning and risk management principles.
- To manage software projects and control software deliverables.
- To develop skills to manage the various phases involved in project management and people management.
- To deliver successful software projects that support the organization's strategic goals.

UNIT I PROJECT EVALUATION AND PROJECT PLANNING 9
Importance of Software Project Management – Activities – Methodologies – Categorization of Software Projects – Setting objectives – Management Principles – Management Control – Project portfolio Management – Cost-benefit evaluation technology – Risk evaluation – Strategic program Management – Stepwise Project Planning.

UNIT II PROJECT LIFE CYCLE AND EFFORT ESTIMATION 9
Software process and Process Models – Choice of Process models – Rapid Application development – Agile methods – Dynamic System Development Method – Extreme Programming – Managing interactive processes – Basics of Software estimation – Effort and Cost estimation techniques – COSMIC Full function points – COCOMO II – a Parametric Productivity Model.

UNIT III ACTIVITY PLANNING AND RISK MANAGEMENT 9
Objectives of Activity planning – Project schedules – Activities – Sequencing and scheduling – Network Planning models – Formulating Network Model – Forward Pass & Backward Pass techniques – Critical Path Method (CPM) – Risk identification – Assessment – Risk Planning – Risk Management – PERT technique – Monte Carlo simulation – Resource Allocation – Creation of critical paths – Cost schedules.
UNIT IV PROJECT MANAGEMENT AND CONTROL 9
Framework for Management and control – Collection of data – Visualizing progress – Cost monitoring – Earned Value Analysis – Prioritizing Monitoring – Project tracking – Change control – Software Configuration Management – Managing contracts – Contract Management.

UNIT V STAFFING IN SOFTWARE PROJECTS 9
Managing people – Organizational behavior – Best methods of staff selection – Motivation – The Oldham–Hackman job characteristic model – Stress – Health and Safety – Ethical and Professional concerns – Working in teams – Decision making – Organizational structures – Dispersed and Virtual teams – Communications genres – Communication plans – Leadership.

TOTAL: 45 PERIODS

At the end of the course, the students should be able to:
- Understand Project Management principles while developing software.
- Gain extensive knowledge about the basic project management concepts, framework and the process models.
- Obtain adequate knowledge about software process models and software effort estimation techniques.
- Estimate the risks involved in various project activities.
- Define the checkpoints, project reporting structure, project progress and tracking mechanisms using project management principles.
- Learn the staff selection process and the issues related to people management.

TEXT BOOK:
1. Bob Hughes, Mike Cotterell and Rajib Mall: Software Project Management – Fifth Edition, Tata McGraw Hill, New Delhi, 2012.

REFERENCES:
1. Robert K. Wysocki: "Effective Software Project Management" – Wiley Publication, 2011.
2. Walker Royce: "Software Project Management" – Addison-Wesley, 1998.
3. Gopalaswamy Ramesh: "Managing Global Software Projects" – McGraw Hill Education (India), Fourteenth Reprint 2013.
Barker recognized this knowledge as an opportunity and introduced the iPod touch to personalize student learning. “Students are already adept at using smart phone technology and they have a natural connection to small, hand-held devices,” said Principal Ron La Motte. “We are also seeing more and more applications being created for the iPods, which will assist with differentiation of instruction.” Barker attended two training workshops on how to best implement the devices in the classroom. “Technology is transforming the way we approach everything, including how we teach,” said Barker, who uses the tool across the curriculum. The iPods were made possible through a grant from the SchoolPower endowment.
When I was in the classroom, I often was asked, “How do you get your student writers to be so good at writing?” Trust me, they were not good writers by accident! They were good writers because we wrote – A LOT! In my class, students wrote every single day. It was a requirement. They knew it from day one. At first, they grumbled a lot and some were quite resistant. After a week or two, it became something that they knew could not be avoided and they accepted it. After a month or so, they looked forward to writing time. How did we get to that point? By writing – A LOT! You see, as the eighth grade English/Language Arts/Reading teacher, I was charged with getting those students ready to write on-demand for the state assessment. The state writing assessment carried a lot of weight. It counted as 25% of the middle school state report card’s accountability score. Then, their state standardized ELA/Reading score counted 25% of the remaining 75% with math, science, and social studies scores rounding out the remainder of that 75%. Therefore, performance in my class was important for the entire school – for sixth and seventh grades as well as the eighth grade. So, in my class, students wrote – A LOT! Times have changed when it comes to accountability scoring but the emphasis on being a good writer is still important for all grade levels and accountability reporting. For that reason, students still need to write – A LOT! To this day, I always look for writing inspiration that I can share with teachers to help develop their student writers. Oh, there is more to developing writers than just challenging them to put words on a page, for certain. Students need a mini-lesson on a regular basis to help guide them in knowing the conventions of the language – grammar, spelling, mechanics, etc. They also need to read continually and especially to read and discuss good writing. 
After all, without a model, learning is just trial and error and schools today do not have time to allow for a lot of trial and error – some, yes, but not a lot. Guiding and developing student writers is an on-going task for any teacher in any content area. However, the most important ingredient to the recipe for developing good student writers is to have expectations for writing – A LOT. Today, I revisited a site that I have enjoyed periodically as inspirational – something that inspires me to be more appreciative of my blessings and encourages me to strive toward being a better person. As I read a couple of stories today, my thought was, “Wow! Wouldn’t this be a great model for student writers? Some student writers could use this as a model and run with it. They could become noticers and voices to tell bits of another’s story.” Sometimes what a person needs to rejuvenate his/her writing life is a bit of inspiration. So, my suggestion for today is, share Humans of New York with your students and maybe one or two might be inspired to develop his or her own Humans of… series, sharing an interesting bit of someone else’s story. And, write – A LOT!
If you happened to visit a vineyard during veraison, you'd be amazed by all the different colors, from green to deep purple. Veraison is the time when the berries start their ripening period by changing their color from green to yellow or red. Each individual berry starts this change at its own speed. Moreover, each grape variety and each location shows different dates of veraison. That's why it's possible to observe so many colors at once. Thinking back to my last veraison experiences, veraison in Bordeaux started at the end of July/beginning of August for Cabernet Sauvignon in 2015, while Napa Valley showed its first red-colored berry in the second week of July in 2016. Nova Scotia, as one of the cool-climate viticulture regions, generally has its seasons later than other wine regions. Most of the vineyards were covered with snow until the end of March this year, and bud burst finally started in the middle of May. Just this week, veraison is under way for most of the varieties. If you consider that most Northern Hemisphere wineries have already started to harvest, you can understand how late the season is here. Lately, I was out in the Gaspereau Valley, one of the wine-growing valleys of Nova Scotia, to observe veraison in the hybrid grapes. Before getting into the topic, let's open a parenthesis about what "hybrid grape" means. To understand what a hybrid is, we should first take a look at the systematic classification of the grapevine. I will just draw a scheme to explain it, since it is a very big family. The varieties that we know, like Cabernet Sauvignon, Merlot etc., are cultivars of Vitis vinifera sativa. So, they all come from the same species, Vitis vinifera. When different species are crossed, from the same or different series, the result is called a hybrid grape.
This crossing can be between species of the same series, for example a crossing of Vitis riparia and Vitis rupestris, or it can be a cross between different series, for example a crossing of Vitis vinifera and Vitis riparia.

Maréchal Foch is a red hybrid variety developed by Eugene Kuhlmann (a French hybridizer) in Alsace, France. It was named after the French marshal (maréchal), Ferdinand Foch. In the first photo you can see that veraison is halfway through, and in the second photo the young leaves of the vine.

Léon Millot is another hybrid developed by Eugene Kuhlmann, and it gives red grapes. Again you can see here the lovely bunches which are starting to change their color, and the young leaves.

New York Muscat
New York Muscat is a hybrid crossed at Cornell University. Being one of the more than 200 grapes which carry the name Muscat, it shares the typical Muscat aromas and is generally used in NS in blends. As far as I have read, it gives pink-skinned grapes when mature; I cannot wait to actually see them ripe soon. In the meantime, I am amazed by the lovely color of the leaves: green as usual on top, but white and hairy on the underside.

L'Acadie Blanc was crossed at the Vineland Research and Development Center in Ontario, Canada, and gives white grapes. You can read some more information on this variety in my previous article. As you can understand from the name, it gives white grapes, so it might be difficult to observe veraison. But if you look at the bunches closely, you will easily see the color change from green to yellow in some of the berries, and when you touch those berries they will be softer - so there you will see that veraison is here!

All images © 2017 by Neslihan IVIT. All rights reserved.
- Peter, M. & Pinhey, C. (2016). The Wine Lover's Guide to Atlantic Canada. Halifax: Nimbus Publishing.
Gliocladium blight – Pink Rot of Palms - Pathogen: Gliocladium vermoeseni (fungus) (recently renamed Nalanthamala vermoeseni). - Hosts: Chamaedorea spp, Dypsis spp. (Areca palms, etc.), date palm, queen palm, Mexican fan palm, others. - Symptoms: Invasive rot of buds, petioles, leaf blades, and trunks/stems, dark brown necrotic areas near the base of the stem, gummy exudates, premature death of fronds, plant death. - Signs: Pink- to salmon-colored spore masses on the surface of diseased plants. - IPM: Minimize plant wounding, use of fungicides as prophylactic during transplanting, minimize water splashing between plants, remove dead leaves from plants, use increased plant spacing, provide air movement, decrease relative humidity, irrigate in the morning to avoid prolonged periods of wetness. - Fungicides: Dithane, Thiophanate methyl. Apply after removing diseased leaves. Source for photos: https://www.icloud.com/iphoto/projects/#7;CAEQARoQr81AB2fnB2ELWhcKQh0hsg;DBA775CB-0656-4F1C-98B1-661C17D5D646
The Secret Code! Imagine that you and your friend, Statistics, are playing this game together. Ask him to secretly choose an important date in his life – maybe his birthday or a favorite holiday – and then to perform the following steps in succession.

| Sl. No. | Steps to follow | Example |
| --- | --- | --- |
| 1 | Write the number corresponding to that month from the month chart given above. Say the date is 1st May; the number corresponding to the month of May is 5. Call it N1. | May 1st; N1 = 5 |
| 2 | Next, multiply that number by 5. | 5 × 5 = 25 |
| 3 | Add 6 to the answer. | 25 + 6 = 31 |
| 4 | Multiply that result by 4. | 31 × 4 = 124 |
| 5 | Add 9 to the total. | 124 + 9 = 133 |
| 6 | Multiply that result by 5. | 133 × 5 = 665 |
| 7 | Add the number of the day – here it is 1 (= 01, for 1st May). Call it N2. | 665 + 1 = 666 |
| 8 | Add 700 to the total. | 666 + 700 = 1366 |
| 9 | Finally, subtract the secret code 865 from the result. The result will read N1N2. | 1366 − 865 = 501 = 5 01 |

An exception: when you subtract 865 and obtain four digits, the first two digits are the number of the month. This is a mathematical game that you can play with your friend Statistics several times. The final answer always ends up being the concatenation of the two numbers, as discussed above.
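Why does the trick work? The steps compute ((5·N1 + 6)·4 + 9)·5 + N2 + 700 = 100·N1 + N2 + 865, so subtracting the secret code 865 always leaves 100·N1 + N2 – the month and day concatenated. The short Python sketch below (an illustrative addition, not part of the original post) carries out the steps:

```python
def secret_code(month: int, day: int) -> int:
    """Carry out the nine steps of the trick and return the final answer."""
    n = month * 5        # step 2: multiply the month number (N1) by 5
    n = n + 6            # step 3
    n = n * 4            # step 4
    n = n + 9            # step 5
    n = n * 5            # step 6
    n = n + day          # step 7: add the number of the day (N2)
    n = n + 700          # step 8
    return n - 865       # step 9: subtract the secret code

# May 1st gives 501, which reads as month 5, day 01.
print(secret_code(5, 1))   # → 501
```

For any date, the answer equals 100 × month + day, which is also why months 10–12 produce the four-digit "exception".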
What is a ‘Financial Cycle’ and How Does it Affect Your Business?

When it comes to business, economic ebbs and flows are part and parcel of everyday operations. Figuring out how to anticipate these movements has become a core requirement, not just in the regular operation of the company, but also in its ability to survive these changes, and certainly to thrive in the long term. These predictions are made using tools and methods devised by those with an in-depth knowledge of the workings of economies and their movements over time. One of the most common tools used in the forecasting of business trends and outlooks is the financial cycle. This concept has to do with the observation of stagnation and growth within an economic environment, which then governs the actions to be taken to remain afloat.

What Exactly is a Financial Cycle?

By definition, a financial cycle is the observation and recording of the movement of the economy covering the periods between prosperity or growth and slumps or recessions. Financial cycles are also referred to as economic cycles or business cycles. They group specific ranges of time into clusters to ease the study of economic activity, thus enabling the formation of predictions. These ranges of time are often established with a specific facet or focus in mind; for example, the business cycle for a particular commodity might span up to 7 years, while one that focuses on technological trends would cover a longer period.

How is a Financial Cycle Established?

The establishment of a financial cycle depends on several factors. These include:
- Overall public spending
- Established interest rates
- Gross domestic product (GDP)
- Employment numbers

For all of these factors, with interest rates as the exception, higher numbers tend to indicate the upper position of a cycle, while declining figures signal that a period of recession is approaching.
Understand that different factors may have different levels of influence over your financial cycle; your business may, for instance, be more susceptible to macroeconomic factors such as GDP. Each cycle comprises four stages that mark distinct periods:

- Expansion: Growth is noticeable and significant. This can happen at a rapid pace, bringing increased activity across all sectors of the economy.
- Peak: The growth observed during the expansion period hits its highest level.
- Contraction: After a peak, a slowdown occurs, often as a natural counter to the expansion that has taken place.
- Trough: The lowest level of the cycle, signaled by stagnant activity. The onset of a trough often leads to an upturn, bringing growth and expansion once more and starting the cycle again.

How Does the Financial Cycle Affect Your Business?

When it comes to the ups and downs of the financial cycle, understanding and anticipation are vital. If you have an idea of what to expect, you can weigh your options and make prudent decisions that maintain the health of your business, even in times of low economic activity. Take note that knowing how to read and interpret financial cycles in general does not paint a full picture. You must go the extra mile and understand the inner workings of the specific sector your business falls into. That knowledge lets you identify the type of cycle to observe for results that closely reflect your position. Intimate knowledge of financial cycles can also help you spot the different stages of the cycle as they begin to occur.
This gives you an edge in running your business, as you can adjust your practices either to mitigate the adverse effects of a trough or to enhance the gains from an expansion or peak. The financial cycle can also be a useful tool for managing your business's debts, whether adding to them or working to pay them down as quickly as possible. Consult a professional today on the intricacies of financial cycle analysis. In today's business environment, this comprehension could be what sets you apart from your competition when hard times come about. Embracing the understanding of financial cycles could well be the key to the longevity and endurance of your business.
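The four stages described above can be labeled mechanically from any single indicator series. The sketch below is purely illustrative (the quarterly GDP-like values are made up for demonstration); it classifies each interior point by comparing it with its neighbours:

```python
def label_cycle(series):
    """Label each interior point of an indicator series as a cycle stage.

    A point is a 'peak' if higher than both neighbours, a 'trough' if
    lower than both, otherwise 'expansion' (rising) or 'contraction'
    (falling). Endpoints are skipped since they have only one neighbour.
    """
    labels = []
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if cur > prev and cur > nxt:
            labels.append("peak")
        elif cur < prev and cur < nxt:
            labels.append("trough")
        elif cur > prev:
            labels.append("expansion")
        else:
            labels.append("contraction")
    return labels

# Hypothetical quarterly GDP index values
gdp = [100, 103, 107, 110, 108, 104, 101, 99, 102, 106]
print(label_cycle(gdp))
```

In practice, analysts smooth the series first (real indicators are noisy, so every wiggle would otherwise register as a peak or trough), but the comparison logic is the same.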
<urn:uuid:a0949e41-438f-4d9c-bb1c-d0477e3717de>
CC-MAIN-2020-24
https://thetotalentrepreneurs.com/financial-cycle-and-how-it-affect-your-business/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00262.warc.gz
en
0.961989
904
3.1875
3
Image of God, Who Am I? Kindergarten is a Catholic religion curriculum for children in kindergarten that presents the faith in a lively manner with colorful graphics. Founded on two unifying key truths: God and creation. Stresses the dignity of each child made in God's image. Presents the faith in terms children understand, with activities that reinforce each lesson. Each lesson includes Bible stories, "Concepts of Faith," home activities and fun worksheets. The children's workbook activities involve coloring, cutting, matching, drawing and printing to reinforce the lesson. The student workbook also includes several booklets for the students to assemble. Family Notes, appearing on the back of the workbook pages, provide take-home material as the basis for family faith discussion and activities. The teacher manual gives suggested time allotments for one-day-a-week and five-days-a-week programs. The lessons include correspondences to the Catechism of the Catholic Church, vocabulary words, a choice of activities to reinforce the lesson, Bible stories, and prayers to know. The appendix contains patterns for the various arts and crafts projects, and additional worksheets.
<urn:uuid:20ef3236-8166-4b18-9eb5-5e63ee740588>
CC-MAIN-2017-30
https://www.ignatius.com/IProducts/175609/who-am-i-kindergarten-student-workbook-2nd-edition.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424247.30/warc/CC-MAIN-20170723042657-20170723062657-00563.warc.gz
en
0.93827
247
3.203125
3
Pine Mountain Settlement School
Series 22: Environmental Education
ENVIRONMENTAL EDUCATION GREEN BOOK (Early manual for EE Program)

TAGS: environmental education; Pine Mountain Settlement School; Green Book; education; Harlan County, KY; Afton Garrison; Candace Julyan; Mary Rogers; John Rupe.

The GREEN BOOK is a manual created by core staff whose combined experience was instrumental in establishing one of the first environmental education outdoor programs in the state of Kentucky, at Pine Mountain Settlement School. The Foreword to the manual explains how the authors saw the basic purpose of the manual and the use of their lesson plans.

THE GREEN BOOK is a sequence of 12 sample lesson plans, each with background information preceding it. The lesson plans can be adapted for use with elementary, secondary, and older students. At the end of the guide is a supplementary reading list and some additional ways in which the immediate school environment can be used to study the way people relate to their surroundings.

Environmental education has come to mean many different things. Sometimes, unfortunately, it has involved a separation of two kinds of learning: the learning of awareness and appreciation through the use of the senses, and the learning of knowledge — specific truths — through experimentation. We hope through this guide to help bring the two areas together and to stimulate the learning of additional skills such as effective communication. Through ecological concepts, one realizes that "awareness" is necessary not only for aesthetic appreciation but also for the understanding of concrete situations. In THE GREEN BOOK we emphasize the importance of using concepts, or frameworks, to understand the ways that plants, animals, water, soil, air, and people fit together. Each natural system, or "ecosystem," may be different, but most have underlying similarities.
Unless we have a frame of reference for seeing those similarities, and unless we can see, describe, and differentiate among members of each ecosystem (which plants, what kind of rock material, how big an animal population), we cannot understand how each affects the others.

Principal authors of this guide were Peter Westover and Nat Kuykendall. Also contributing to the writing and editing of the guide were Afton Garrison, Candace Julyan, Mary Rogers, and John Rupe. The ideas presented in this guide have been collected from many different sources, to whom we are indebted. For assistance with several lesson plans we would especially like to thank the State of Kentucky Department of Education and the Title III Region 6 Office, E.S.E.A. Many areas of study were outside the scope of this guide, but are no less important to an understanding of our total surroundings. For other Pine Mountain lesson plans and additional ideas for using local resources for study, see Appendix I and Appendix II. This guide has not been copyrighted and we encourage others to use it or reprint it in any way.

TABLE OF CONTENTS

1. Observation as an Art
   - OUTDOOR LOG — Sample Lesson Plan
   - MY TREE — Sample Work Sheet
2. Classification as a Tool
   - MAKING A KEY — Sample Lesson Plan
3. The Ecosystem Concept
   - AN AREA AS AN ECOSYSTEM — Sample Lesson Plan
4. The Earth as Raw Material for Ecosystems
   - THE CREATION OF SOIL — Sample Lesson Plan
   - Stream Volume Table for Measuring Sediment
5. Natural Cycles: What Makes Them Go?
   - RECYCLING OF LIFE — Sample Lesson Plan
6. Natural Succession: Mechanics and Terminology
   - Shade Tolerance of Eastern Forest Trees
   - NATURAL SUCCESSION — Sample Lesson Plan
7. Populations: Ups and Downs
   - POPULATION — Sample Lesson Plan
8. Adaptation: How Living Things Survive
   - ADAPTATION TO CHANGING SEASONS — Sample Lesson Plan
   - LIFE IN WINTER — Sample Lesson Plan
9. People and Ecosystems
   - MAKING AN IMPACT STATEMENT — Sample Lesson Plan
   - DIARY OF A CREEK — Sample Lesson Plan
   - STRIP MINE REPORT — Sample Work Sheet
10. 30 Books — A Short Reading Guide for Teachers

Appendix I: Using School Areas for Study — Additional Suggestions
Appendix II: Additional Pine Mountain Activities and Available Lesson Plans

GALLERY – THE GREEN BOOK: Teaching Ecological Concepts Outdoors, 1974
<urn:uuid:5844074f-98fc-4b0a-a342-e06a988a574b>
CC-MAIN-2017-34
https://pinemountainsettlement.net/?page_id=44088
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105970.61/warc/CC-MAIN-20170820034343-20170820054343-00185.warc.gz
en
0.879471
959
2.765625
3
What is so thought-provoking that it warrants the name “Aristotle’s Lantern”? It’s what the mouth-parts of urchins are called. Today while I was submerging, there was a dead Green Urchin floating at the surface, spines rotted off but mouth still intact. This allowed me to photograph the jaw parts outside the urchin’s shell (test). This is something I would not normally be able to photograph, because I would have to lift an urchin to do so and I do not want to displace the life I see. Also, because most dead urchins I find floating about have been “otterized”. An otterized urchin is one where a predator has broken through the bottom of the urchin. Mammalian predators of urchins include River Otters, Sea Otters, Mink and humans. Wolf-Eels and Sunflower Stars are also predators. Another reason the mouth parts are difficult to photograph is that they can retract into the urchin when it is alive. The photos included below of the full mouth structure are from another dead urchin whose “Lantern” I preserved. See how complex it is? There are 5 jaws made from plates of calcium, which are held together by muscle. When the urchin wants to chew away at seaweed/algae, the structure is pushed out, the mouth opens, and the urchin chews by moving the structure side-to-side. You can imagine that chewing would wear down the calcium, but no worries – the Lantern grows from the tip, reportedly at 1 to 2 mm per week. Why are an urchin’s mouthparts called “Aristotle’s Lantern”? Because Aristotle is believed to have described them as “lantern-like” in Historia Animalium (The History of Animals) more than 2,340 years ago. Indeed, the “horn lanterns” used in Aristotle’s time looked like the mouthparts, having 5 panes covered with cow horn that had been boiled and shaped. BUT there are biologists who disagree, believing that there were “historical ambiguities with the original translation” and that Aristotle was referencing the WHOLE urchin’s shell as being lantern-like, rather than just the mouthparts.
Oops – if that be true, Aristotle would not be happy that the mouthparts of urchins’ near relatives, sand dollars, are also referenced as “Aristotle’s Lanterns”. There, don’t you feel better now knowing all of this? I am here striving to lighten and enlighten . . . lanterns and all. 💙☺️💙

Here’s a “Shape of Life” video of urchins feeding, narrated to be oh so dramatic:

About regeneration, aging and life expectancy in sea urchins

Like sea stars and other echinoderms, urchins can regenerate body parts, e.g. their spines and tube feet. Research by Bodnar and Coffman (2016) found that this ability to regenerate lost or damaged tissues does not decrease with age in three local urchin species: the Variegated Urchin (Lytechinus variegatus), Purple Urchin (Strongylocentrotus purpuratus) and Red Urchin (Mesocentrotus franciscanus). This is of particular interest since the life expectancies of these three urchin species are very different: respectively 4 years, 50+ years, and 100+ years. Yet, “the fact that all species showed the same consistent ability to regenerate tissue despite age and life expectancy undermines the current evolutionary theories of ageing. It was previously expected that species with shorter lifespans would invest fewer resources in maintenance and repair, perhaps to invest greater energy in reproduction. So this study has shed light on a new, unexpected factor that contradicts the current theory.” Source: Biosphere.

Regarding the life expectancy of Green Urchins (Strongylocentrotus droebachiensis), from Fisheries and Oceans Canada: “Aging techniques for B.C. Green Urchins are currently being developed by the Pacific Biological Station, but Green Urchins on the Atlantic Coast have been known to live from 20 to 25 years of age.”

See below for images of urchins feeding and of “urchin barrens”. Urchins are an important part of the marine ecosystem, but when we killed off the Sea Otters who eat urchins, this led to too much kelp being eaten. The resulting “urchin barrens” are a loss of habitat and food for other organisms and result in less carbon buffering and oxygen production by kelp. Sea Star Wasting Disease, and specifically the devastation of Sunflower Stars (Pycnopodia helianthoides), has also led to urchin barrens, because Sunflower Stars too are predators of urchins. For more information, see my blog “Wasted. What is happening to the sea stars of the NE Pacific Ocean?”

- Bodnar AG, Coffman JA. Maintenance of somatic tissue regeneration with age in short- and long-lived species of sea urchins. Aging Cell. 2016;15(4):778-787. doi:10.1111/acel.12487
- Mount Desert Island Biological Laboratory. “Is aging inevitable? Not necessarily for sea urchins: Study shows that sea urchins defy aging, regardless of lifespan.” ScienceDaily, 25 May 2016.

4 Responses to “Aristotle’s Lantern”

Thank you for once again sharing your observations, novel and fascinating to me, in words and photographs. So many things happening under/in the ocean, yet often unconsidered by humankind while we remain caught up in daily routines. I believe (hope) COVID-19 is bringing more people to awareness/appreciation of the wonders of this world, with help from people like you.

Fascination with nature can easily be overshadowed by despair at the sight of so much harm done to the web of life. Your continued sharing of that deep cold amazing world you dive in . . . and especially the last two paragraphs of your recent poetic offering . . . renew my hope. In gratitude for all you do and are, B.P.

Your words motivate and help lift me away from the despair.

Jackie – Fascinating! Each time I read something written by you, I dive with a new perspective. And keep a keen eye out for something I may have merely glanced at in the past. Thank you!

I appreciate this feedback so much. It adds to my motivation to keep at it.
<urn:uuid:935f87ce-8765-4cd2-b9bb-f67a6b16679a>
CC-MAIN-2023-23
https://themarinedetective.com/2020/09/13/aristotles-lantern/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00618.warc.gz
en
0.928703
1,444
3.328125
3
Early last year, TransCanada started up a pipeline called Keystone with little fuss or fanfare. It runs from Canada to Steele City, Neb., then east to Wood River and Patoka, Ill. And it got all the required permits from the State Department and other agencies. The company’s next pipeline, the Keystone XL, followed a different route on the ground — and in the political arena, kicking up controversy. Why didn’t TransCanada use the same route as it did for the Keystone line? The first pipeline entered the United States farther to the east and ran down the eastern edge of Nebraska, farther away from the state’s ecologically sensitive Sandhills and Ogallala Aquifer. TransCanada spokesman Shawn Howard explains the first Keystone line used its own “underutilized natural gas line that was converted to oil service [and] recertified for such use.” Converting an existing pipeline “meant that we were able to disturb less land and reduce the pipeline’s environmental footprint,” he said. For Keystone XL, he said, TransCanada looked for a more direct path. Howard said that “many of the activists who oppose Keystone XL throw [out] the idea of moving it ‘over there,’ knowing full well it starts the review process from scratch. Fourteen different route alternatives were examined as part of the original Keystone XL application. The route we applied for disturbed the least amount of land while minimizing water crossings and other sensitive areas that we could disturb as part of construction. Generally the biggest environmental impact a pipeline has is during construction.” Recently the company moved, for a second time, its proposed route through Nebraska to avoid more ecologically sensitive Sandhills and to move down gradient — the geological equivalent of downhill — from the drinking water supplies of three Nebraska towns. While TransCanada awaits permits for the northern leg of the Keystone XL pipeline, construction on the southern leg already has begun. 
Texas farm manager Julia Trigg Crawford, a high-profile foe of the Keystone XL who lost an eminent domain case against TransCanada, says that about two weeks ago she “found TransCanada surveyors on my place.”
<urn:uuid:114c9a66-8be6-4afe-9400-58ac3ca0a1e1>
CC-MAIN-2013-48
http://www.washingtonpost.com/business/keystone-xl-breaks-ground-in-texas/2012/09/21/7c68b22a-0370-11e2-8102-ebee9c66e190_story.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345772826/warc/CC-MAIN-20131218054932-00068-ip-10-33-133-15.ec2.internal.warc.gz
en
0.949069
530
2.875
3
According to the Surgeon General, quitting smoking is the single most important step a smoker can take to improve the length and quality of his or her life. As soon as you quit, your body begins to repair the damage caused by smoking. Of course it's best to quit early in life, but even someone who quits later in life will improve their health. It's expensive to smoke cigarettes. In some places, a pack of cigarettes costs more than $10—and prices keep rising. Even if a pack costs "only" $5 where you live, smoking one pack per day adds up to $1,825.00 each year. Smoking is a hassle. More and more states and cities have passed clean indoor air laws that make bars, restaurants, and other public places smokefree. Are you tired of having to go outside many times a day to have a cigarette? Is standing in the cold and the rain really worth having that cigarette? Wouldn't it be easier if you could choose to go outside only when you want to and not when you need to? Cigarette smoke harms everyone who inhales, not just the smoker. Whether you're young or old and in good health or bad, secondhand smoke is dangerous and can make you sick. Children who live with smokers get more chest colds and ear infections, while babies born to mothers who smoke have an increased risk of premature delivery, low birth weight and sudden infant death syndrome (SIDS). Both you and the people in your life will breathe easier when you quit. Ex-smokers don't carry the scent of smoke on their clothes and hair, and their homes don't smell like cigarettes. Better breathing can mean better sleep at your house: Not only are smokers more likely to snore, so are non-smokers who breathe secondhand smoke on a daily basis. Life is just better as a nonsmoker! Because smoking interferes with your sense of taste, food tastes better when you quit. Your sense of smell also improves, so get ready to really enjoy the scent of flowers or fresh-cut grass. 
You'll be able to make it through a long movie or an airplane flight without craving a cigarette. Within a few weeks after quitting, your smoker's cough will disappear and you'll have more energy. See how quickly your body responds to your decision to quit smoking on the benefits of quitting timeline.
<urn:uuid:974cda59-f5be-40d7-b110-4e2aff5b346b>
CC-MAIN-2017-47
http://www.easterseals.com/cvs-smoking/stop-smoking/the-best-reasons-to-quit-smoking.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00222.warc.gz
en
0.959943
487
2.640625
3
Following the recent Supreme Court ruling, heterosexual couples will now be able to enter into a civil partnership. Although the Government is yet to make a change to the legislation, it will grant many couples more rights than those who are cohabiting. For many couples, the thought of marriage does not bring about equality due to the traditional gender roles and religious vows. This change therefore allows greater freedom for those who do not want to participate in a ceremony and exchange vows, although they can do so if they wish. So, what is the difference between a marriage and a civil partnership? A civil partnership is created by the signing of a document which includes the signatures of both parents of the couple and the couple are then known as civil partners, but they cannot say they are married for legal reasons. A marriage requires a formal ceremony to take place with vows, whereas a civil partnership only requires that a document is signed. In terms of legal rights, a civil partnership affords a very similar position to marriage, for example, the rights are the same for inheritance, tax and pensions. However, it could be argued that globally, marriage is recognised in a whole host of countries and civil partnerships are only recognised in a few. It appears that Scotland is also looking into this. Here at the Staffordshire University Legal Advice Clinic, we can advise on family related issues.
<urn:uuid:23b0305d-2d46-4597-8d57-dd1fc8e53697>
CC-MAIN-2020-24
https://blogs.staffs.ac.uk/law-policing-forensics/2018/11/16/is-marriage-or-a-civil-partnership-right-for-me/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00027.warc.gz
en
0.971947
274
2.96875
3
The term "fatty fish" may sound unappealing, but actually, these are the healthiest and most delicious foods from the sea. Oily fish such as salmon, tuna, sardines, mackerel and trout are full of omega-3 fatty acids—good fats, unlike the bad saturated fat you find in most meats. According to the American Heart Association, people should eat at least two servings weekly of lake herring, lake trout, mackerel, salmon, sardines or tuna for the healthy omega-3 fats they contain. 6 High-Fat Foods That Are Good for You It wasn't long ago that we blamed fat for all of life's ails. Sure, fat can make you gain weight and contribute to chronic diseases like heart disease, cancer and stroke. But not all fats were created equal. In fact, as you've probably heard, certain types of fat are actually good for your health. So which "fattening" foods should you be eating?
<urn:uuid:12d50057-b3a8-4bfa-9796-f8e4ab9a06a9>
CC-MAIN-2013-20
http://www.self.com/fooddiet/2011/08/6-healthy-high-fat-foods-slideshow?slide=7
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701670866/warc/CC-MAIN-20130516105430-00013-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963304
206
2.625
3
Plasma technology is an extreme thermal process that uses plasma to convert organic matter into synthesis gas (syngas), which is primarily made up of hydrogen and carbon monoxide. A plasma chamber, powered by an electric arc, is used to ionize gas and break organic matter down into synthesis gas, with an inert solid slag remaining as a by-product. This method is used commercially as a form of waste treatment and has been tested for effective disposal of municipal solid waste, biomass, industrial waste, hazardous waste and solid hydrocarbons such as coal, oil sands, pet coke and oil shale.

Process: The plasma chamber is fed an inert gas such as nitrogen. A strong electric current under high voltage passes between two electrodes as an electric arc. The pressurized inert gas is ionized as it passes through the arc, forming the plasma. The combination of controlled pressure and temperature sustains the plasma reaction within the chamber. The waste is heated, melted and finally vaporized. Only at these extreme conditions can molecular dissociation occur, breaking apart molecular bonds so that complex molecules are separated into individual atoms. The resulting elemental components leave in a gaseous phase and as solid slag. Molecular dissociation using plasma is referred to as "plasma pyrolysis".

Feedstock: The feedstock for plasma waste treatment is most often municipal solid waste, organic waste, or both. The feedstock may also include biomedical waste and hazardous materials. The content and consistency of the waste directly affect the performance of a plasma facility. Segregating and recycling useful material before gasification improves consistency. Too much inorganic material, such as metal and construction waste, increases slag production, which in turn decreases syngas production. A benefit, however, is that the slag itself is chemically inert and safe to handle (certain materials may affect the content of the gas produced).
Shredding waste before it enters the main chamber helps to increase syngas production: it creates a more efficient transfer of energy and ensures more of the material is broken down.
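As a rough illustration of the molecular dissociation described above, consider an idealized plasma pyrolysis of cellulose as a stand-in for organic waste. The stoichiometry C6H10O5 → 5 CO + 5 H2 + C is a simplifying assumption for this sketch (the oxygen content limits CO formation and the leftover carbon reports to the solid residue), not a figure from this article:

```python
# Molar mass of one cellulose monomer unit (C6H10O5), in g/mol (~162.14)
M_CELLULOSE_UNIT = 6 * 12.011 + 10 * 1.008 + 5 * 15.999

def syngas_moles_per_kg(mass_kg=1.0):
    """Return (mol CO, mol H2, mol solid C) per kg of cellulose feed,
    under the assumed reaction C6H10O5 -> 5 CO + 5 H2 + C."""
    mol_units = mass_kg * 1000.0 / M_CELLULOSE_UNIT
    return 5 * mol_units, 5 * mol_units, 1 * mol_units

co, h2, c = syngas_moles_per_kg()
print(f"Per kg cellulose: {co:.1f} mol CO, {h2:.1f} mol H2, {c:.1f} mol solid C")
```

Real feedstock is far messier than pure cellulose, which is exactly why the article stresses that waste content and consistency drive facility performance.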
<urn:uuid:481bd7ae-60ad-4e38-a119-31666d48d755>
CC-MAIN-2023-23
https://www.aggrezzo.com/bio-medical-hazardous-waste-management-plasma-technology/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654097.42/warc/CC-MAIN-20230608035801-20230608065801-00354.warc.gz
en
0.928296
408
3.859375
4
Every thyroid patient and doctor should become more informed about the challenges of diagnosing central hypothyroidism. Did you know that people can even have elevated TSH levels in central hypothyroidism? Wow, really? Yes, the research says so. A graph of research results reveals it clearly, too. Not only that, but here's another shocker. As of 2019, it is acknowledged that not only low FT4, but low-normal FT4 can occur with central hypothyroidism. Such amazing things thyroid science can teach. The news can be so confusing for doctors who are taught the simple idea that low TSH and low FT4 need to coexist in central hypothyroidism! If you have "central hypothyroidism," it means that not enough TSH hormone is being secreted given your level of thyroid hormones. It can also mean that TSH molecules are not as bioactive. Less bioactive TSH is incapable of functioning normally in TSH receptors. Here are the main reasons you, my fellow thyroid patients, can benefit from learning about central hypothyroidism:

- If your doctors are trusting the TSH to diagnose you prior to therapy, and you have central hypo but don't know it, you may remain undiagnosed as a person who suffers a genuine thyroid hormone deficiency.
- If you have central hypothyroidism while you're on thyroid therapy, you may be perpetually underdosed by TSH-only monitoring if doctors mistakenly believe you have a simple case of primary hypothyroidism.

This post, in fact, is an essential introduction to a future post in which I'll ask the key question "Have you been screened for central hypothyroidism?" — because it can silently coexist with primary hypothyroidism. In this post, I will do the following:

- Introduce you to the thyroid scientists who are the experts on this condition and whose 2017 and 2019 articles I rely on for a lot of my information (though of course I've read far beyond them).
- Provide a summary of the basics and some anatomy diagrams that illustrate normal function, followed by authoritative definitions of central hypothyroidism.
- Discuss an amazing graph showing how diverse the TSH-T4 relationships can be prior to therapy.
- Outline Persani's updated diagnostic criteria as of 2019, which tweak the 2018 ETA guidelines he co-authored.
- Quote and discuss Beck-Peccoz's 2017 list of 6 diagnostic challenges.
- Suggest 3 potential solutions, and recommend a free diagnostic app.
- Conclude with more general thoughts on the systemic causes of neglected diagnosis of central hypothyroidism.

Meet the scientific experts

Today's best research is often led by Paolo Beck-Peccoz and/or Luca Persani and co-investigators. Both hail from the University of Milan in Italy, and some articles on central hypo are co-authored by both of them. As of writing in late 2019, Beck-Peccoz had no fewer than 51 scientific articles listed on Scopus, a scientific citation-aggregation service by Elsevier. The majority of his publications are on advanced topics in central hypothyroidism, such as the 2013 European Thyroid Association guidelines for the diagnosis and treatment of thyrotropin-secreting pituitary tumors and a textbook chapter on TSH hormone. Persani had 269 scientific documents listed on Scopus. He focuses not only on central hypothyroidism but also on problems with cortisol secretion, growth hormone, and sex hormones. (One of the many causes of low cortisol is low pituitary secretion of the ACTH hormone that stimulates the adrenal glands.) He even deals with syndromes of excess TSH secretion such as resistance to thyroid hormone (RTH). His scope of interdisciplinary knowledge is utterly jaw-dropping. In this post I draw mainly on Beck-Peccoz's 2017 article, with supplementary info from Persani's 2019 article and support from additional sources. Notice the first title's claim of neglect:

- Beck-Peccoz, P., Rodari, G., Giavoli, C., & Lania, A. (2017).
Central hypothyroidism—A neglected thyroid disorder. Nature Reviews. Endocrinology, 13(10), 588–598. https://doi.org/10.1038/nrendo.2017.47 - Persani, L., Cangiano, B., & Bonomi, M. (2019). The diagnosis and management of central hypothyroidism in 2018. Endocrine Connections. https://doi.org/10.1530/EC-18-0515 How rare is central hypothyroidism, really? The rates at which Central Hypo is diagnosed are likely lower than they should be. Persani’s team say this in their 2019 abstract: “Recent data enlarged the list of candidate genes for heritable CeH [Central Hypothyroidism], and a genetic origin may be the underlying cause for CeH discovered in pediatric or even adult patients without apparent pituitary lesions. … This raises the doubt that the frequency of CeH may be underestimated.” The focus in the past has been on “pituitary lesions,” but people can have genetically-driven central hypothyroidism without explosion of a cyst on the pituitary, and without any concussion or traumatic brain injury. There are some major obstacles to its diagnosis, mainly the “reflex TSH strategy” now overruling the process of screening of thyroid disease and monitoring thyroid therapy. Additional issues are raised by Beck-Peccoz and team in this 2017 article as well. Therefore it is likely an underestimate when Beck-Peccoz reports the official incidence rates: “The global prevalence of central hypothyroidism ranges from 1 in 20,000 to 1 in 80,000 individuals in the general population and it is a rare cause of hypothyroidism (1 in 1,000 patients with hypothyroidism).” Beck-Peccoz’s article’s title announced that central hypothyroidism is “a neglected thyroid disorder.” Neglect is a vicious circle. The insufficient medical education about the thyroid disorder leads to ineffective screening strategies. >> Ineffective screening leads to low rates of diagnosis. >> Low rates of diagnosis mean low statistics on rates of prevalence. 
>> The low statistics contribute to its continued neglect as a disorder. >> People mistakenly think that only a few patients are falling through the cracks. Notice what happens when you add one more variable to the screening test at one stage of life: Rates of diagnosis increase, and so do rates of prevalence! “the prevalence of congenital central hypothyroidism in the Netherlands increases to 1 in 16,000 newborn babies if the screening algorithm is based on the combined measurement of TSH, T4 and T4-binding globulin, which could be effective in diagnosing mild forms of the disease.” (Beck-Peccoz et al, 2017) What would happen if better screening for central hypothyroidism was offered to adults and newborns everywhere, not just newborn babies in the Netherlands? The standard TSH-based screening test will not flag central hypothyroidism because it is diagnosed by assessing the inappropriate relationship between Free T4 and TSH. It can’t be diagnosed by measuring TSH alone, or FT4 alone. Even if both TSH and FT4 are measured in the same blood test, their interrelationship will often be overlooked. Central hypo can’t be diagnosed by the usual way doctors diagnose things nowadays — by looking at these two numbers in isolation from each other and only in relationship to reference range boundaries. Often a history of many lab tests is necessary to see a general pattern. Anatomy and function To see the pattern of the pathology, we should start by understanding the basic theory underlying normal function. In the center of your head, the pituitary gland dangles down below the hypothalamus from the middle of your brain. In this visual model below, the hypothalamus and its TRH is colored red, and the pituitary and its TSH in green. (As a side note, the pituitary looks rather like a teeny weeny pair of testicles, and I wonder how that imagery has played into the impression of TSH’s dominance over thyroid hormones.) 
The hypothalamus secretes TRH to the anterior pituitary, where the TSH-secreting "thyrotrope" cells are located. TRH stimulates TSH secretion, which stimulates thyroidal T4 and T3 secretion.

Multiple hormones, multiple functions

The term "central" has arisen because the glands that fail to secrete enough hormone are located in the place that endocrinology has declared the "center" of hormone control. These two "central" glands secrete many hormones that regulate other glands like the thyroid, adrenals, and gonads. I've highlighted in yellow the two hormones involved in the HPT axis in this image, the TRH and the TSH:

You can see how important these glands are. It's sometimes imagined that these central glands are part of the "brain," but they are not. Despite their close proximity to the brain, most of the hypothalamus and all of the pituitary gland are not protected by the blood-brain barrier (BBB), so that they can receive input from factors both inside and outside the BBB. Some of these factors can interfere with T4 and T3 co-regulation of TRH secretion.

Definitions of hypothyroidism

Hypothyroidism has tended to be defined by which gland has failed:

- Primary hypothyroidism is failure of the thyroid gland to secrete enough T3 and T4 hormone in response to TSH.
- Secondary hypothyroidism is failure of the pituitary gland to secrete enough TSH from pituitary "thyrotropes" (that's why TSH is sometimes called "thyrotropin"), and
- Tertiary hypothyroidism is the failure of the hypothalamus to secrete enough TRH (thyrotropin-releasing hormone) to stimulate the pituitary gland to secrete enough TSH, or enough bioactive TSH.

In central hypothyroidism, your hypothalamus or your pituitary gland (or both) has been compromised, and their co-regulation of TSH secretion from the anterior pituitary cannot be trusted. The end result is "defective TSH secretion," which means TSH can be defective in both quality and quantity.
Let's look at what Beck-Peccoz and colleagues have to say about the definitions and language:

- "Central hypothyroidism is characterized by a defect in thyroid hormone secretion, resulting from the insufficient stimulation of a healthy thyroid gland by TSH. This condition can be a consequence of an anatomic or a functional disorder of the pituitary gland and/or the hypothalamus.
- Central hypothyroidism was formerly termed secondary hypothyroidism of pituitary origin or tertiary hypothyroidism of hypothalamic origin, resulting from insufficient TSH stimulation by TSH-releasing hormone (TRH).
- These terms [secondary and tertiary], however, are no longer in common use because the disorders frequently affect both the hypothalamus and the pituitary gland and the common result is defective TSH secretion."

Persani et al, 2019 further break down the pathology of central hypo into three sub-factors:

- a) "impaired [pituitary] thyrotrope stimulation" by TRH — either due to hypothalamus organ damage/defects causing less TRH secretion, or "TSH resistance" in pituitary thyrotrope cells due to genetic mutations.
- b) "reduced pituitary TSH reserve" — anything causing a loss in the number of cells or sensitivity of TSH-producing thyrotrope cells.
- c) reduced "bioactivity" of TSH molecules secreted by the pituitary.

The component most often forgotten is item c, which maps onto the lesser-known phenomenon of defective TSH molecules. Look at how falsely elevated the TSH can be when non-bioactive TSH is detected by the TSH assay!

Each dot in the graph above represents a single patient prior to thyroid therapy. You can see how diverse the condition is in its biochemistry. All of these dots are evidence of dysfunctional TSH secretion. Persani's 2012 article, which also uses a version of this graph, explains that the higher TSH levels in this graph are from persons with a hypothalamic dysfunction officially diagnosed by TRH-TSH testing.
It is largely non-bioactive TSH, incapable of stimulating T4 secretion from the thyroid. The lower FT4 results are mostly from people whose dysfunction is centered more in the pituitary than the hypothalamus.

Because of this, in central hypothyroidism, misdiagnosis is easy:

- You can have lab results that look like subclinical hypothyroidism (moderately elevated TSH with low-normal FT4) when you are actually extremely hypothyroid.
- You can have lab results that look like normal thyroid hormone health because TSH is "in reference range," if FT4 is not tested.

Beck-Peccoz said this in 2017: "The diagnosis of central hypothyroidism is based on low circulating levels of free T4 in the presence of low to normal TSH concentrations."

But wait! It's not that easy! You see the graph above!

Persani edits the Free T4 generalization in 2019.

Persani's article mentioned that the new 2018 European Thyroid Association guidelines say this: "experts agreed that diagnosis of overt CeH should be considered in every subject with low serum concentrations of FT4, measured by reliable immunoassay and low or normal immunoreactive TSH concentration, confirmed on two independent determinations."

However, Persani's Figure 1, supposedly a representation of the ETA guidelines, silently appears to amend them by saying not just "Low T4" but "Low or low-normal FT4," both in the figure notes and in the figure itself.

Persani is the lead author of the 2018 ETA guidelines, and his 2019 article is coauthored; one cannot assume the team collectively made this edit by mistake.

As of 2019, therefore, Persani's edited version of the ETA guidelines states a broader set of biochemical criteria prior to thyroid therapy: "low, or even low–normal, free T4 with inappropriately low/normal TSH." This is the starting place, and additional investigations are recommended to confirm it.
The bottom line is the word "inappropriately" — the TSH level does not constitute an appropriate response to the Free T4 level. Truly, even a low-normal FT4 ought to be diagnostic if it is mainly the "inappropriateness" of TSH secretion and its non-bioactivity that is at issue. Aren't you left wondering why Persani didn't edit the criteria to include an elevated TSH, given the graph above? You should be.

Six barriers to diagnosis of central hypo

Will it be easy to notice central hypo if the TSH has the largest voice in diagnosis? No. Will adding FT4 testing help enough to diagnose or adjust therapy? Not if technology or expertise are lacking, based on the barriers Beck-Peccoz has outlined.

Let's look at the short list of misunderstandings summarized in Beck-Peccoz's abstract: "Obtaining a positive diagnosis for central hypothyroidism can be difficult from both a clinical and a biochemical perspective."

Beck-Peccoz outlined six reasons for the neglect of a central hypothyroidism diagnosis:

"1) methodological interference in free T4 [measurements], or
2) [methodological interference in] TSH measurements;
3) routine utilization of total T4 or T3 [rather than Free T4 or T3] measurements;
4) concurrent systemic illness that is characterized by low levels of free T4 and normal TSH concentrations;
5) the use of the sole TSH-reflex strategy, which is the measurement of the sole level of TSH, without free T4, if levels of TSH are in the normal range; and
6) the diagnosis of congenital hypothyroidism based on TSH analysis without the concomitant measurement of serum levels of T4."

False outcomes of inappropriate testing

Item #6 above is the major barrier. But there's a seventh barrier even if both TSH and FT4 are tested:

7) Incorrect or outdated medical knowledge of the clinical and biochemical presentation and diagnostic criteria.

If FT4 is not analyzed with knowledge of an "appropriate" TSH in mind, the diagnosis of "inappropriate" TSH secretion will be missed.
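To make the idea of an "inappropriate" TSH concrete, the broad biochemical starting point described above (low or even low-normal FT4 with a TSH that is not elevated) can be sketched as a simple flag. This is only an illustrative sketch, not a clinical tool: the reference ranges and the "bottom quarter of the range" cutoff for low-normal FT4 are assumptions chosen for the example, not published values.

```python
def flag_possible_central_hypo(tsh, ft4,
                               ft4_range=(12.0, 22.0),   # pmol/L, assumed assay range
                               tsh_range=(0.4, 4.0)):    # mIU/L, assumed assay range
    """Flag lab results worth investigating for central hypothyroidism (CeH).

    Sketches the broad 2019 criterion described in the article:
    low, or even low-normal, free T4 with an inappropriately low/normal
    TSH. "Low-normal" is taken here as the bottom quarter of the FT4
    reference range -- an illustrative choice, not a published cutoff.
    """
    ft4_lo, ft4_hi = ft4_range
    low_normal_cutoff = ft4_lo + 0.25 * (ft4_hi - ft4_lo)

    ft4_low_or_low_normal = ft4 < low_normal_cutoff
    # A healthy axis should answer a genuinely low FT4 with an elevated
    # TSH; a low/normal TSH in that setting is "inappropriate."
    tsh_not_elevated = tsh <= tsh_range[1]

    return ft4_low_or_low_normal and tsh_not_elevated

# A TSH of 1.8 looks unremarkable on its own, but paired with an FT4 of
# 11.5 (below the assumed range) it is an inappropriate response:
flag_possible_central_hypo(1.8, 11.5)   # -> True
```

The point of the sketch is the pairing: neither number alone triggers the flag, which is exactly why TSH-only screening misses the pattern.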
TSH-reflex testing will often hide central hypo (CeH) in three ways:

- If TSH is normal, the CeH patient can be misclassified as normal and euthyroid, and FT4 won't even be tested to confirm it. False.
- If TSH is mildly high, even if FT4 is tested by "reflex," a result just above the lower FT4 boundary can get the CeH patient misclassified as subclinical hypo and not yet deserving of thyroid therapy. False.
- If TSH is low, and reflex testing of FT4 or FT3 reveals them as also low-normal, the CeH patient can be misclassified as subclinical hyper because of a mistaken belief that TSH always trumps thyroid hormone levels. False.

In comparison to the barriers caused by the TSH monotesting policy (which they call a "reflex"), the methodological interferences they list first are relatively minor. That's because you have to get past the barriers of lack of expertise and TSH-only reflex testing before you even get to the level of noticing that test interferences exist:

- Because FT4 tests are subject to interference, they can yield falsely amplified T4 results that make it seem like the TSH level is not inappropriate. Do you know all the causes of FT4 test interference? Probably not.
- Because TSH results can also be subject to interference, an interfered-with TSH can't show how abnormal it truly is in relation to FT4. Do you know all the causes of TSH test interference as well? There are more than most people know, and more than are listed by Beck-Peccoz et al, 2017.
- If TT4 is measured instead of FT4, changing ratios of bound versus free hormone can falsely inflate the results. Diagnosis should focus on the "free" fraction that can enter cells, including hypothalamus and pituitary cells.
- If TSH and FT4 are interfered with by concurrent critical illness, it is difficult to separate permanent/preexisting central hypo from temporary, illness-induced central hypo.
(NOTE: If central hypo is preexisting or permanent, you may have a harder time recovering from the low T3 levels of critical illness, because recovery requires elevated bioactive TSH secretion to stimulate a healthy thyroid. But this is a fact Beck-Peccoz does not mention.)

Educate. Diagnosis requires correct knowledge.

I'm sure Persani and Beck-Peccoz would agree that we need to spread medical education about all causes of hypothyroidism, including central, primary, and peripheral thyroid dysfunctions.

Educate doctors about the vulnerability of the HPT axis! Stop teaching them to mindlessly mumble the mantra that "the TSH is the most exquisitely sensitive test of thyroid function," as if saying it over and over gives TSH the power to trump contradictory thyroid hormone test results. Educate doctors to be humble and open-minded to information from clinical data, new research, and scientifically educated patients.

However, we can't wait forever until all doctors become better educated, better critical thinkers, and more humble. Once beliefs become dogmas and presumptions, they can blind individual doctors and harden entire medical associations' stances and guidelines for decades. Therefore, we must educate patients themselves and the public, so that they can be armed with the science to dispel medical ignorance and advocate for themselves and their loved ones who suffer.

Stop the mass institutionalization and enforcement of TSH-reflex testing systems. They hijack clinical decision making, subvert ethical doctor-patient partnerships, keep sick people sick, and can lead to further costly testing and treatments for the supposed "nonthyroidal" causes of hypothyroid symptoms. I'm sure Beck-Peccoz and Persani would agree with this too.

What about the loophole that "you can get around the TSH-reflex strategy if you write a valid reason for requesting a Free T4 test on the requisition, if you say you suspect central hypothyroidism"?
(That's what patients and doctors are being told in some Canadian provinces where this policy is enacted.)

Well, that's just not fair. The suspicion of a central hypothyroidism diagnosis may not even enter a doctor's or patient's mind until full thyroid results are provided and turn out to be puzzling in light of knowledge!

Nobody should put thyroid-hormone-test prohibition powers in the hands of health care organizations and laboratories for the sake of saving $10-13 CDN per test. Let educated doctors and patients decide together which collection of thyroid hormone and TSH tests is needed on a case-by-case basis, without interfering coercion from health administrators who declare FT4 knowledge "unnecessary" before anybody has it.

Provide calculations based on hormone relationships to improve diagnosis.

This solution is overlooked by both Persani's and Beck-Peccoz's articles. Not everyone is going to be an expert capable of looking at TSH and FT4 and immediately discerning an "inappropriate" relationship between them, especially amid the stress and multiple distractions of medical professional life.

For some of our lab results, the physician is already provided calculations or ratios to aid diagnosis. Consider the way LDL/HDL cholesterol ratios are analyzed now. We even have liver test calculators and kidney failure calculations. Where are our thyroid hormone ratio calculations?

Use the free SPINA-Thyr app

Until laboratory test results give doctors an accurate FT4-TSH relationship calculation, consider downloading SPINA-Thyr. SPINA-Thyr has been developed and clinically tested by researching endocrinologists over the past 20 years to account for T4 hormone binding, normal T4 clearance rates, the normal statistical range of sensitivity of TSH secretion to T4, and the logarithmic nature of TSH measurement. See our SPINA-Thyr post for references and more information.
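Two of the calculations SPINA-Thyr reports, Jostel's TSH index (TSHI) and the TSH-T4 resistance index (TTSI), have published formulas that can be computed by hand from a single TSH/FT4 pair. The sketch below is a rough illustration and not the full SPINA-Thyr model; the formulas follow the definitions published by Jostel and colleagues (TSHI = ln TSH + 0.1345 × FT4, with FT4 in pmol/L; TTSI = 100 × TSH × FT4 ÷ the FT4 upper reference limit), and the default upper reference limit of 22 pmol/L is an assumption: substitute your own lab's value.

```python
import math

def tsh_index(tsh_miu_l, ft4_pmol_l):
    """Jostel's TSH index (TSHI): ln(TSH) + 0.1345 * FT4 (FT4 in pmol/L).

    Lower values flag a TSH that is inappropriately low for the FT4,
    the pattern expected in central hypothyroidism."""
    return math.log(tsh_miu_l) + 0.1345 * ft4_pmol_l

def ttsi(tsh_miu_l, ft4_pmol_l, ft4_upper_ref=22.0):
    """TSH-T4 resistance index (TTSI): 100 * TSH * FT4 / FT4 upper limit.

    Higher values flag a TSH that is inappropriately high for the FT4,
    as in resistance to thyroid hormone or a TSH-secreting adenoma.
    The default limit of 22 pmol/L is an assumed assay range."""
    return 100.0 * tsh_miu_l * ft4_pmol_l / ft4_upper_ref

# Example: an untreated patient with TSH 1.0 mIU/L and FT4 10 pmol/L.
# ln(1.0) = 0, so TSHI = 0.1345 * 10, about 1.345 -- a low-ish index
# that invites a closer look at whether the TSH response is appropriate.
print(tsh_index(1.0, 10.0))
print(ttsi(1.0, 10.0))
```

Notice that both indices express a relationship, not a single hormone level, which is what a bare reference-range check cannot do.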
Here is a screenshot of a sample SPINA-Thyr analysis of lab test results in a person before thyroid therapy.

Signs of TSH secretion inappropriateness are the lower TSHI and the lower TTSI, both with asterisks showing they are borderline or low.

- The "TSH index" can show how abnormally low the TSH is in relation to FT4 in cases of central hypothyroidism.
- The "TTSI" calculation can also show how abnormally high the TSH is in relation to the FT4; it is mainly useful for diagnosing cases of resistance to thyroid hormone or pituitary TSH-secreting adenomas.

Signs of thyroid gland health are in the GT and GD structural parameters. Keep in mind these parameters can only be calculated for adults on no thyroid therapy (GT & GD) and/or LT4 monotherapy (GD).

- The GT (thyroid gland secretion) normal reference range is 1.4 to 8.57, and a result of 2.7 is on the lower end of normal. TSH is not stimulating this gland to put out as much T4 as the average person's, but it's still in the normal range.
- The GD is the global deiodinase efficiency of T4-T3 conversion, with a range of 20.0 to 40.0. This result shows a very high-normal efficiency: more FT3 is being converted from T4 than in the average person.

Use lab test analysis tools, even while keeping in mind that any test result may be subject to technical or biological interference.

A neglect caused by disease invisibility and complexity

To conclude, I'd like to suggest that the "neglect" of central hypothyroidism is larger than just a failure of the diagnostic technology and screening guidelines that show up in Beck-Peccoz's list. Diagnostic failure rests in human perception and psychology. Human strengths and weaknesses in reasoning are at the basis of scientific observation and institutional decision-making. In all aspects of medicine, our collective strengths and humility move us forward, but our weaknesses and arrogance hold us back.
Arrogance regarding one's present state of thyroid education and clinical experience can affect the most brilliant of doctors and scientists, and it can infect entire medical associations.

The hypothalamus, pituitary, and thyroid are just as crucial as the heart and brain, and they are not disconnected from the rest of the body. The heart and the brain, and every other organ, require enough thyroid hormone to function. Even if the thyroid is healthy, inappropriately low TRH and TSH secretion can put the T4 and T3 supply to heart and brain at risk. Whether central hypothyroidism is temporary or permanent (a distinction made by Beck-Peccoz) makes little difference to the glands and tissues that suffer and to the patients at higher risk of death in Low T3 Syndrome / Nonthyroidal Illness (NTIS).

Our society focuses on heart and brain diseases more than hypothyroidism largely because heart and brain diseases are the attributed, manifest causes of death and crippling disease. We tend to focus on manifest diseases and their final display before one's demise, and we have invested a lot in the technologies for monitoring these pathologies once they reach the point of manifestation, and especially hospitalization.

The invisibility of hypothalamus/pituitary compromise is largely due to these glands' tiny size, their unseen location in the middle of our skull, and their incredibly high level of sensitivity and complexity. But most of all, pathologies of inappropriate TSH secretion are bound to remain invisible in the medical culture's standard view of thyroid hormone health and disease. Blinded by TSH worship, this culture fails to attend to abnormal TSH-thyroid-hormone relationships that can undermine overall human health and subvert recovery. Who that is steeped in the TSH-T4 paradigm wants to admit that the TSH demigod is fallible, and that the certainty built on belief in its omniscience is false?
The short-term and superficial cost-saving nature of the TSH-only or TSH-reflex diagnostic strategy is seductive yet harmful. Boasts about reduced costs too often overwhelm the ethical and scientific arguments on behalf of the patients who suffer from misdiagnosis and non-diagnosis. Nobody can hold systems accountable for causing a misdiagnosis or a missed diagnosis if doing so would require evidence from test results that were discouraged or outlawed, right? They've forbidden the existence of incriminating test results.

These anti-thyroid-testing campaigns and their oversimplified view of lab results are an insult to experts like Beck-Peccoz and Persani and to all those who understand the crisis of central hypothyroidism in critical illness and early childhood development.

Central hypothyroidism is truly more complex, diverse, widespread, and, may I even suggest, more deadly and crippling than we realize.

- Tania S. Smith

Beck-Peccoz, P., Lania, A., Beckers, A., Chatterjee, K., & Wemeau, J.-L. (2013). 2013 European Thyroid Association guidelines for the diagnosis and treatment of thyrotropin-secreting pituitary tumors. European Thyroid Journal, 2(2), 76–82. https://doi.org/10.1159/000351007

Beck-Peccoz, P., Bonomi, M., & Persani, L. (2014). Thyroid-Stimulating Hormone (TSH). In Reference Module in Biomedical Sciences. https://doi.org/10.1016/B978-0-12-801238-3.00102-1

Beck-Peccoz, P., Rodari, G., Giavoli, C., & Lania, A. (2017). Central hypothyroidism—A neglected thyroid disorder. Nature Reviews. Endocrinology, 13(10), 588–598. https://doi.org/10.1038/nrendo.2017.47

Persani, L. (2012). Central hypothyroidism: Pathogenic, diagnostic, and therapeutic challenges. The Journal of Clinical Endocrinology & Metabolism, 97(9), 3068–3078. https://doi.org/10.1210/jc.2012-1616

Persani, L., Cangiano, B., & Bonomi, M. (2019). The diagnosis and management of central hypothyroidism in 2018. Endocrine Connections.
https://doi.org/10.1530/EC-18-0515

Persani, L., Brabant, G., Dattani, M., Bonomi, M., Feldt-Rasmussen, U., Fliers, E., Gruters, A., Maiter, D., Schoenmakers, N., & van Trotsenburg, A. S. P. (2018). 2018 European Thyroid Association (ETA) Guidelines on the Diagnosis and Management of Central Hypothyroidism. European Thyroid Journal, 7(5), 225–237. https://doi.org/10.1159/000491388
Nail means: To fasten, as with a nail; to bind or hold, as to a bargain or to acquiescence in an argument or assertion; hence, to catch; to trap.
Nail means: To spike, as a cannon.
Nailbrush means: A brush for cleaning the nails.
Nailer means: One whose occupation is to make nails; a nail maker.
Nailer means: One who fastens with, or drives, nails.
Naileress means: A woman who makes nails.
Naileries means: of Nailery
Nailery means: A manufactory where nails are made.
Nail-headed means: Having a head like that of a nail; formed so as to resemble the head of a nail.
Nailless means: Without nails; having no nails.
Zufolo means: A little flute or flageolet, especially that which is used to teach birds.
Zuchetto means: A skullcap covering the tonsure, worn under the berretta. The pope's is white; a cardinal's red; a bishop's purple; a priest's black.
Zuche means: A stump of a tree.
Zubr means: The aurochs.
Zoutch means: To stew, as flounders, eels, etc., with just enough liquid to cover them.
Zounds means: An exclamation formerly used as an oath, and an expression of anger or wonder.
Zouave means: Hence, one of a body of soldiers who adopt the dress and drill of the Zouaves, as was done by a number of volunteer regiments in the army of the United States in the Civil War, 1861-65.
Zouave means: One of an active and hardy body of soldiers in the French service, originally Arabs, but now composed of Frenchmen who wear the Arab dress.
Zosterops means: A genus of birds that comprises the white-eyes. See White-eye.
Zostera means: A genus of plants of the Naiadaceae, or Pondweed family. Zostera marina is commonly known as sea wrack, and eelgrass.
Copyright © 2016 LingoMash. All Rights Reserved.
Water Filtration Plant

The mission of the water filtration plant is to serve and protect the public health and welfare by providing a safe and reliable water supply today and into the future.

The City of East Chicago has been providing a safe drinking water supply for its citizens since 1918, when the original water filtration plant was built at the northern limits of the city. The original plant served a population of approximately 40,000 people. The filtration plant was upgraded in 1929 to increase capacity due to the growth of industry in the area. Then in 1964, a new state-of-the-art facility was built to meet the still-increasing demand from the expanding base of industrial manufacturing. This facility has now reached the end of its life cycle. Due to the condition of the current facility and the more stringent requirements established by the Safe Drinking Water Act (SDWA), major upgrades are required to ensure the future safety, reliability, and quality of the drinking water supply.

The City of East Chicago has taken the first steps toward modernizing the filtration plant facility to ensure that both current and foreseeable future requirements can be met. The City of East Chicago began the construction of a new drinking water treatment plant in December 2009. The new treatment plant is being relocated to south of Cline Avenue to open up the lakeshore to additional uses for the community. As was the case in 1929, the City of East Chicago will utilize a state-of-the-art technology, membrane filtration, at the new treatment plant prior to disinfection and distribution.
Anatomy of the Respiratory System in Children

What is respiration?

Respiration is the act of breathing:

- Inhaling (inspiration): taking in oxygen
- Exhaling (expiration): giving off carbon dioxide

What makes up the respiratory system?

The respiratory system is made up of the organs involved in the interchange of gases and consists of the:

- Mouth (oral cavity)
- Larynx (voice box)

The upper respiratory tract includes the following:

The lower respiratory tract includes the following:

- Airways (bronchi and bronchioles)
- Air sacs (alveoli)

What is the function of the lungs?

The lungs take in oxygen, which the body's cells need to live and carry out their normal functions. The lungs also get rid of carbon dioxide, a waste product of the cells.

The lungs are a pair of cone-shaped organs made up of spongy, pinkish-gray tissue. They take up most of the space in the chest, or the thorax (the part of the body between the base of the neck and diaphragm). The lungs are enveloped in a membrane called the pleura.

The lungs are separated from each other by the mediastinum, an area that contains the following:

- Heart and its large vessels

The right lung has three sections, called lobes. The left lung has two lobes.

When you breathe, the air:

- Enters the body through the nose or the mouth.
- Travels down the throat through the larynx (voice box) and trachea (windpipe).
- Goes into the lungs through tubes called main-stem bronchi:
  - One main-stem bronchus leads to the right lung and one to the left lung
  - In the lungs, the main-stem bronchi divide into smaller bronchi
  - Then into even smaller tubes called bronchioles
  - Bronchioles end in tiny air sacs called alveoli
- Whatman paper, gouache, pencils, markers, photographs, colored paper, ribbons, fabric, cereal grains and other materials to create texture.

Decide on the theme of your wall newspaper. If the paper is intended to celebrate scientific achievements, make strong statements of fact, and it is appropriate to use photos of the winners (and their prizes), as well as images associated with the scientific achievement or victory.

Use materials at hand to decorate the wall newspaper. For example, for variety, you can give the wall newspaper an interesting texture. To do this, use cereal grains (semolina, wheat grains and other handy materials): apply glue to the desired area of the Whatman paper and sprinkle the grains over the glue. Allow the glue to dry, then gently shake the excess grains off the paper.

Think of an original, catchy name for your newspaper and decorate it creatively. To do this, use a stencil, or draw mock-ups of your letters on colored paper and cut them out neatly from colored paper or fabric. If this idea seems too time-consuming for the title of the newspaper, draw the name in gouache or markers (or pencil).

Use a variety of ribbons, decorative cord and fabric to decorate the paper as well. These materials help create a special style for your future wall newspaper. With fabric (for example, satin or lace) glued to the back side of the paper, you can change (round off) the shape of the sheet and set the mood of your wall newspaper. Lace adds romance; a regular grid adds extravagance and originality.

When decorating the paper, think through your strategy first. Choose a style and follow it throughout the creative process. It is best to draw the necessary sketches in pencil first. Otherwise, you risk ruining both the material and your mood.
If you decide to use cereal grains to decorate the paper, a combination of several textures will look original. For example, you can combine wheat grains and semolina. The resulting texture can be painted in different colors (using gouache).
Developments in Microphone Technology What's interesting is that this matchstick-sized microphone can be attached to drones. Conventional microphones work when sound waves make a diaphragm move, creating an electrical signal. Microflown's sensor has no moving parts. It consists of two parallel platinum strips, each just 200 nanometres deep, that are heated to 200° C. Air molecules flowing across the strips cause temperature differences between the pair. Microflown's software counts the air molecules that pass through the gap between the strips to gauge sound intensity: the more air molecules in a sound wave, the louder the sound. At the same time, it analyses the temperature change in the strips to work out the movement of the air and calculate the coordinates of whatever generated the sound. EDITED TO ADD (10/6): This seems not to be a microphone, but an acoustic sensor. It can locate sound, but cannot differentiate speech. Posted on October 4, 2013 at 6:59 AM • 67 Comments
Davis and Hersh do for math what Hawking does for physics and Dawkins does for biology. I read this book in college where, admittedly, I was studying mathematics. But I loved the book for how it put into words the excitement of trying to solve a problem, and put into context the history of how we came to want to solve problems. There's music, art, nature, philosophy and more in the study of math, and the authors give fascinating pictures of mathematicians and their times, as well as their subjects. Different areas of math fill different chapters, and if you imagined all math is bound in the unimaginative half of the brain, this book will have you quickly imagining more. Knowing some basic math, sines and cosines perhaps, and the format of basic mathematical diagrams will help, but it's not essential to enjoying this book--there's even a chapter on symbols to help them make sense. And another on abstraction, formalization, algorithms... and fig leaves?

About the reviewer: Sheila Deeth (SheilaDeeth). Sheila Deeth's first novel, Divide by Zero, has just been released in print and ebook formats. Find it on Amazon, Barnes and Noble, Powells, etc. Her spiritual speculative novellas can be found at …

We tend to think of mathematics as uniquely rigorous, and of mathematicians as supremely smart. In his introduction to The Mathematical Experience, Gian-Carlo Rota notes that instead, "a mathematician's work is mostly a tangle of guesswork, analogy, wishful thinking and frustration, and proof ... is more often than not a way of making sure that our minds are not playing tricks." Philip Davis and Reuben Hersh discuss everything from the nature of proof to the Euclid myth, and mathematical aesthetics to non-Cantorian set theory.
They make a convincing case for the idea that mathematics is not about eternal reality, but comprises "true facts about imaginary objects" and belongs among the human sciences.
We found 4 items that match your search

This worksheet is designed to help students focus on the information presented during the first two minutes of the Heaven Will Protect the Working Girl documentary.

These words and phrases from the Heaven Will Protect the Working Girl documentary may be unfamiliar to students.

In this activity, students watch short clips of the ASHP documentary Daughters of Free Men to learn about the experiences of Lowell mill girls in the 1830s. Students follow the life of Lucy, a young girl working in Lowell in 1836. After each clip, [...]

In this activity, students watch the documentary Heaven Will Protect the Working Girl in sections, with documents and exercises designed to support and reinforce the film's key concepts: workers challenging the effects of industrial capitalism, the [...]
No matter the safety precautions, spills will sometimes occur. Cleaning the soil afterwards is difficult, expensive and time-consuming. If you don't clean the soil, the gas and oil will move from the soil and pollute nearby streams, rivers and lakes. Site owners often resort to digging up the soil and dumping it in a landfill. The digging approach is hugely destructive. Above-ground buildings and plants are destroyed to dig massive holes in the ground. The contaminated soil is hauled to a treatment facility or, more commonly, a secure landfill. Companies, government and the public like digging because it solves the local problem with a week or two of intensive activity. It's also a visible commitment by the company and the government to manage the environment. What people don't see is the environmental damage caused by removing the foundation of an ecosystem — the soil. They also don't see the dangers to the workers and communities as toxic soil is moved through their towns and communities on the way to a landfill.

Time and patience

As we see in other spheres, individuals promoting simple solutions to complex problems are often lauded. But ecology is complex and it's subtle. And the quick way to do things is often the wrong way to do things. Instead, why not nudge the natural soil ecosystem to clean itself? "In situ" remediation of an oil or gas spill — doing it on site — is not difficult, but there is a delicate art to achieving success. Soil bacteria and fungi will naturally degrade oil and gas if they have two things: fertilizer and energy. A mixture of nitrate and phosphate agricultural fertilizers used at very low concentrations is usually enough to meet the first requirement. For energy, bacteria use fertilizers like nitrate, iron or sulfate.
The combination of these energy sources, along with the naturally occurring oxygen, provides the bacteria and fungi all they need to degrade almost all of the oil or gas — as long as the temperature is above freezing. By adding a bit of this mixture over a few years, polluted soils will often restore themselves. Depending on where you are, this can be easy, if the soil is sandy, or very difficult, if the site is full of clay.

Restoration over remediation

Most surface spills — from gas stations with leaking tanks or at facilities where oil and gas may be transferred between vehicles — typically only pollute the upper six to eight metres of soil. There are plenty of natural organisms there ready to degrade these pollutants, and plenty of engineering solutions to get the nutrients to these organisms. The soil and ecosystem can heal itself over time if you've given it the right ingredients. It's not unlike baking a cake: mixing the right proportions of the right ingredients and giving it time to bake. For example, slowly injecting low concentrations of fertilizers into an urban soil site degraded the gasoline. We've done this at six sites in Saskatchewan that have been polluted for over 20 years. We added very small amounts of fertilizers at a slow and steady pace across the sites for the past three years. After only two years, the amount of gasoline in the soil has been reduced by 90 per cent at all of the sites. Groundwater concentrations of gasoline are close to background levels in the nearby environment. We're now adapting this approach for use in northern territories and provincial areas. But in situ remediation does take longer. A typical project will last two to four years — and, sometimes, it doesn't work, which can add to the timeline or cost.

Risk, tricks and money

In situ remediation is not widely used because many companies feel it carries business risks and strains relationships.
From an accounting perspective, it's better for a company to postpone an expense, like cleaning a site, thanks to the "discount rate." For example, spending $100,000 in the first year of an in situ remediation project and then monitoring it at a cost of $40,000 per year for the next three years is more expensive than spending $300,000 in the fifth year to dig up the site after the regulatory pressure has become too great. This accounting trick only works if the accounting team then ignores the remainder of the site liabilities a company holds, or assumes that they will clean up the site over a very long time frame so that the magic of discount rates can make their environmental liability manageable. The second risk is a relationship risk. Digging up soil is 100 per cent effective because it is possible to excavate directly to the property line and install a geo-technical membrane to stop pollutants from migrating. Although there is only limited data so far, in situ remediation is not 100 per cent effective. It's easy to see why senior leadership teams often vote for the 100 per cent effective solution and then, using the right discount rate, their accounting teams can make it seem cheaper. This way, companies can assure the public, government and shareholders that the remediation plans will work. Yet in situ remediation can be far less pricey than excavation. "Dig and dump," as it is often called, can cost $150 per cubic yard of soil, or more ($300 per cubic yard) in remote areas. Others have found even higher costs. The price tag for in situ remediation, on the other hand, can be as little as $20 to $80 per cubic yard. In addition, in situ remediation does not require the demolition of buildings or forests. Often only a small cargo container is all that's needed to distribute the fertilizer and energy sources to a site of 10,000 square metres for three years.
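The discount-rate arithmetic in that example can be sketched in a few lines. This is an illustrative calculation only: the 10 per cent discount rate is an assumption (the article does not specify one), and the cash flows are the hypothetical figures quoted above.

```python
# Present value of the two clean-up options from the example above,
# discounted at an assumed 10% annual rate.
RATE = 0.10

def present_value(cash_flows):
    """cash_flows: list of (year, amount) pairs; year 0 = today."""
    return sum(amount / (1 + RATE) ** year for year, amount in cash_flows)

# In situ: $100,000 up front, then $40,000/year of monitoring for 3 years.
in_situ = present_value([(0, 100_000), (1, 40_000), (2, 40_000), (3, 40_000)])

# Dig and dump: a single $300,000 expense deferred to year 5.
dig_and_dump = present_value([(5, 300_000)])

print(f"in situ:      ${in_situ:,.0f}")       # ≈ $199,474
print(f"dig and dump: ${dig_and_dump:,.0f}")  # ≈ $186,276
```

Even though dig-and-dump costs more in nominal dollars, deferring it makes it look cheaper on paper, and raising the assumed rate widens the gap further, which is exactly the accounting incentive the article describes.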
Such a small and portable option makes in situ remediation a promising technology for sites located along pipelines, railways or highways in the Rockies. While much attention is focused on the disastrous potential of spills into the tidewater, mountainous terrain is sensitive and difficult to preserve. There's real potential for spills on land, but in situ remediation can mitigate those risks and help nature heal itself.
March 16, 2009 Stem Cells Crucial To Diabetes Cure In Mice

More than five years ago, Dr. Lawrence C.B. Chan and colleagues in his Baylor College of Medicine laboratory cured mice with type 1 diabetes by using a gene to induce liver cells to make insulin. "Now we know how it works," said Chan, director of the federally designated Diabetes and Endocrinology Research Center at BCM and chief of the division of endocrinology in BCM's department of medicine. "The answer is adult stem cells." A gene called neurogenin3 proved critical to inducing cells in the liver to produce insulin on a continuing basis, said Chan and Dr. Vijay Yechoor, assistant professor of medicine-endocrinology and first author of the report that appears in the current issue of the journal Developmental Cell. The research team used a disarmed virus called a vector to deliver the gene to the livers of diabetic mice by a procedure commonly known as gene therapy. "The mice responded within a week," said Yechoor. The levels of sugar in their blood plummeted to normal and stayed that way for the rest of their normal lives. The quick response generated more questions, as did the length of time that the animals stayed healthy. They found that there was a two-step response. At first, the neurogenin3 gene goes into the mature liver cells and causes them to make small quantities of insulin, enough to drop sugar levels to normal, said Yechoor. "This is a transient effect," he said. "Liver cells lose the capacity to make insulin after about six weeks." However, they found that other cells that made larger quantities of insulin showed up later, clustered around the portal veins (blood vessels that carry blood from the intestines and abdominal organs to the liver). "They look similar to normal pancreatic islet cells (that make insulin normally)," said Yechoor. They found that these "islet" cells came from a small population of adult stem cells usually found near the portal vein.
Only a few are needed usually because they serve as a safety net in case of liver injury. When that occurs, they quickly activate to form mature liver cells or bile duct cells. However, neurogenin3 changes their fates, directing them down a path to becoming insulin-producing islet cells located in the liver. The mature liver cell cannot make this change because its fate appears to be fixed before exposure to neurogenin3. The islet cells in the liver look similar to those made by the pancreas after an injury, said Yechoor. "If we didn't use neurogenin3, none of this would happen," he said. "Neurogenin3 is necessary and sufficient to produce these changes." Chan cautioned that much more work is needed before similar results could be seen in humans. The gene therapy they undertook in the animals used a disarmed viral vector that could still have substantial toxic effects in humans. "The concept is important because we can induce normal adult stem cells to acquire a new cell fate. It might even be applicable to regenerating other organs or tissues using a different gene from other types of adult stem cells," he said. Finding a way to use the treatment in humans sounds easier than it is, he said. The environment in which cells grow appears to be an important part of cell fate determination. However, he and Yechoor plan to continue their work with the eventual goal of providing a workable treatment for people with diabetes. Others who took part in this research include Victoria Liu, Christie Espiritu, Antoni Paul, Kazuhiro Oka and Hideto Kojima. (Kojima is now with Shiga University of Medical Science in Otsu, Japan.) Funding for this work came from the National Institutes of Health, the NIDDK-designated Diabetes and Endocrinology Research Center at BCM, the Betty Rutherford Chair in Diabetes Research (held by Chan), St. Luke's Episcopal Hospital, the Iacocca Foundation and the T.T. & W.F. Chao Global Foundation.
Geography: Made in Alexandria, Virginia, United States
Dimensions: 13 ft. 9 in. x 24 ft. 2 1/2 in. x 50 ft. 9 1/2 in.
Credit Line: Rogers Fund, 1917
Accession Number: 17.116.1

The rich woodwork from the ballroom of Gadsby's Tavern reflects the continuation of the Georgian decorative tradition into the early Federal period. Constructed in 1792–93, the ballroom originally stood on the second floor of the Federal-style City Tavern and Hotel. It was one of the most refined public spaces in Alexandria, Virginia. In 1798 and 1799, George Washington celebrated his birthnight balls at the tavern. During his 1824–25 tour of the United States, the marquis de Lafayette spent several days there. Tavern ballrooms like the one at Gadsby's were multipurpose, flexible spaces that received a great deal of wear. Often the most elegant room in an establishment, they played host to balls, concerts, lectures, club meetings, and other large gatherings. An 1802 inventory of this ballroom reveals objects related to lighting—chandeliers, looking glasses—and heat—fireplace implements. Seating, dining, or other furniture could be brought in or taken out as the situation dictated. Though it was the site of refined entertainments, the whitewashed walls and chair-rail-high dados reflected the practical side of the room's prominence. Higher up, away from the scuffs and kicks of hands and feet, the room's richly carved ornament is ordered, almost perfectly symmetrical, and reflects the principles and motifs of Georgian decoration despite its late date. The scrolled pediments over the doors and mantels, the crossetted moldings that embellish the doorways, windows, and mantels, the fretwork chair rail, and the dentil-molded cornice are similar to examples popularized by English pattern books of the 1740s and 1750s. The room's two chimney-breasts are a simplified version of Plate L from Abraham Swan's The British Architect (1758).
Though Swan was most popular in the colonies in the period preceding the Revolution, his classically inspired designs adhering to the tenets of Georgian decoration remained in use in the decades following independence.
Press Release Date: February 27, 2001

Black women at any age who have uterine myomata—commonly called fibroids—are more likely to have them surgically removed through a myomectomy, a procedure that preserves the uterus, than are white or Hispanic women with fibroids. While research by the Duke University Evidence-based Practice Center (EPC) on the management of uterine fibroids, sponsored by the Agency for Healthcare Research and Quality (AHRQ), confirmed earlier studies that showed that black women have higher rates of hysterectomy—surgical removal of the uterus—than any other racial group, the EPC found that black women also have higher rates of myomectomy. The incidence of fibroids is higher in black women than in other racial groups, and black women tend to have larger and more numerous fibroids when first diagnosed, so they are more likely to need treatment than any other racial group. The rate of hysterectomies among black women with fibroids is higher than that for white women (50 percent versus 30 percent), but the EPC found that black women are more likely than women of other racial groups to undergo surgery, either by myomectomy or hysterectomy, to treat their fibroids. The EPC researchers conclude that the high rate of hysterectomy among black women with fibroids does not appear to be because they are not offered more conservative surgery. The scope of the report did not include an examination of reasons why black women have so many hysterectomies in general.
The EPC reviewed the available research on benefits, risks, and costs of commonly used medical therapies for uterine fibroids, and found that the majority of the published literature did not provide clear answers about optimal treatments. The researchers found tremendous differences in incidence and outcomes among racial groups, and they urge that more research be conducted to provide clear evidence to help women make informed decisions about the best treatment for their situation. The summary, Management of Uterine Fibroids, is available through the AHRQ Publications Clearinghouse, by writing to P.O. Box 8547, Silver Spring, MD 20907, by calling 1-800-358-9295, or through the Internet at http://www.ahrq.gov/clinic/epcsums/utersumm.htm. The summary also is available from the National Guideline Clearinghouse™ (NGC) at http://www.guideline.gov (select NGC Resources). The full report of the EPC should be available in late spring. For additional information, please contact AHRQ Public Affairs, (301) 427-1364: Karen Carp, (301) 427-1858 ([email protected]).
If you’re using a sub-Ohm tank and coils (sub-Ohm means coils with a resistance of less than 1.0 Ohm), it's important you understand a few basics to avoid liquid in your mouth, or liquid leaking out of airflow channels, etc. In addition to the technique required, you must ensure the battery (power source) you're using is capable of operating at the low resistance of your coil. The wrong battery with low resistance coils is a serious safety issue (overheating/explosion/fire) so if in any doubt get more information first. Sub-Ohm vaping has become popular with users who want increased vapour and/or more flavour. Sub-Ohm vaping is not for everyone so don't think this is or should be every vaper's goal. We've included some specifics about the Eleaf Melo for clarity, but this article is relevant to most sub-Ohm devices, you just need to fine tune everything to your device. All mechanical mods, variable voltage/wattage regulated mods and e-cigs are electrical devices powered by a battery. The practice of using or building low resistance coils is directly related to the principles of Ohm's and Joule's (electricity) laws which state that given a non-variable voltage source (such as the battery in a mechanical mod) you can increase the power output (wattage) by decreasing the resistance (Ohms). Low resistance/sub-Ohm coils are not always necessary when using a variable voltage/wattage device, but when they are required it is so that the device can operate at a higher wattage output. As the resistance decreases the amperage will increase (your device will use/need more amps to supply the current needed) which increases the strain on the battery, which in turn can increase the heat that the battery and coils generate. You must ensure that you never exceed the amperage limit of your battery. The concept of sub-Ohm vaping is simple, however, users must understand exactly what they're doing before use. 
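The Ohm's law relationship described above can be checked with a quick calculation. This is an illustrative sketch only: the 4.2 V battery voltage (a freshly charged lithium cell), the 0.5 Ohm coil and the 20 A battery rating are assumed example values, not a recommendation for any particular device.

```python
# Current and power for a fixed-voltage (mechanical) mod, per Ohm's and Joule's laws.
voltage = 4.2        # volts: a freshly charged lithium cell (assumed example)
resistance = 0.5     # ohms: e.g. the Eleaf Melo 0.5 Ohm coil
amp_limit = 20.0     # amps: continuous discharge rating of the battery (assumed)

current = voltage / resistance       # I = V / R
power = voltage ** 2 / resistance    # P = V^2 / R

print(f"current: {current:.1f} A")   # 8.4 A
print(f"power:   {power:.1f} W")     # 35.3 W

# Lower resistance draws more current; never exceed the battery's amp limit.
assert current <= amp_limit, "coil resistance too low for this battery!"
```

Halving the resistance to 0.25 Ohm would double the current to 16.8 A at the same voltage, which is why the article stresses checking your battery's amperage limit before dropping coil resistance.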
Ensure you follow safety rules to avoid your device failing which could result in injury or property damage from fire or explosion. Ensure the power, whether it be a wattage or a voltage level, is in the correct range for your coil. Most coils have this range (their minimum and maximum) printed on them although it can be hard to read at times because it's very small. In the case of the Eleaf Melo 0.5 Ohm coils, the wattage range printed on the coils is 20-30W. For sub-Ohm vaping in general an average wattage output of 30-35W is probably a guide as to the sort of power required. The Melo is designed to match the Eleaf iStick 30 mod which has a range of 5-30W but if you set the wattage too low the liquid cannot vapourize, and instead it will gurgle away, the liquid may get darker in colour, and you’ll end up with it in your mouth or seeping out wherever it can because it has nowhere else to go every time the fire button is pressed. The actual range you are likely to need to vape the Melo at is about 25W. You also need to use a good quality e-liquid with more VG than PG (VG is thicker than PG and produces a lot more vapour, and reduced throat hit). An 80% VG ratio is often ideal for very low resistances but you may find anything from 70% VG to a 50% VG is the right choice for your device. Be aware that 6mg (0.6%) nicotine is probably an absolute maximum strength e-liquid to use in any sub-Ohm device. If you can’t take the hit from the increased vapour from the VG content of the e-liquid, turning the power down will cause more problems than it solves, instead you need to reduce the nicotine strength. You may also find higher strength e-liquids (above 6mg) taste burnt or harsh with a sub-Ohm coil. When you press the fire button you need to inhale directly into the lungs. Sub-Ohm vaping, which usually has a looser more airy draw, is designed for direct lung inhale, as opposed to the more traditional mouth to lung inhale. 
Don't let the juice build up in the coil (unused) because it will boil away like a kettle while the fire button is pressed - you want the juice to be taken into the coil and then vaped. As a guide a 4-5 second draw is about average. If the liquid builds up in and around the coil you will hear it crackling and popping away quite loudly. When you're using cotton coils, it's very important to ensure the coil is thoroughly saturated in e-liquid to avoid dry hits (to avoid the cotton inside the coil getting burnt). The burning releases unhealthy chemicals, a bit like burnt toast, and although different chemicals are produced/released, neither are considered good and should be avoided. Ensure you keep your mechanical mod clean - this includes all threads, vent holes, contacts, and the switch. If you're using a spring-loaded switch be mindful of how stiff it feels. If, over time, the spring seems to be getting softer to press it's likely wearing out and needs to be replaced before it fails. Consider upgrading to magnets next time round. Use your mod's locking mechanism when you're not using it. Never use an unvented mod to sub-Ohm vape (the more vents the better). You will also want a mod with low voltage drop. Voltage drop is the amount of voltage lost when the electricity travels from your battery through your device and to your coils. For this reason the most preferred are mods that feature a single piece tube, fixed position contacts, and a magnetic switch. Spring-loaded or threaded contacts, multiple piece tubes, or telescoping tubes might be convenient but will raise voltage drop, and spring-loaded switches can wear out quickly or fail.
We know summer has arrived when we bite into a juicy sweet strawberry or tasty ripe raspberry. Food preservation season begins with preserving berries by freezing, canning, drying or as jams and jellies. The freezing of berries is a great place for a new food preserver to develop their preservation skills. Freezing saves time, nutrients, and can maintain the fresh taste and color of fruit. Preserve fruits as soon as possible after harvest and at the peak of ripeness. To clean, place the berries in a colander, dip in cool water and gently swish and drain. Do not soak berries in water. Fruit can be frozen with sugar, in a sugar water syrup, or unsweetened. Unsweetened fruits lose color, flavor, and texture faster than those packed in sugar or sugar syrups. Sugar substitutes, if used in freezing fruit, add a sweet flavor but are not as beneficial in preserving color and texture as sugar. A convenient way to freeze berries is to tray pack. Simply spread a single layer of berries on a shallow tray and freeze. When frozen, promptly package, label, and return to the freezer. Most frozen fruits maintain high quality for 8 to 12 months when frozen in quality freezer containers. Be sure to maintain your freezer temperature at 0°F or below. Whether you have your own strawberry patch, visit a "pick-your-own," or stop by a farmers' market, you have wonderful access to berries, and that is a "berry" good thing.
Biology touches every aspect of human life and connects with every other science. The study of biology draws on almost all the branches of science, including chemistry, physics, sociology, geology, climatology, etc.

Relation of biology with chemistry
- Metabolism is the set of chemical reactions occurring in living organisms. The synthesis of a complex organic compound from raw materials is called anabolism. The breaking down of a complex substance into a simpler form is called catabolism. Metabolism includes anabolic and catabolic activities, which are purely chemical phenomena.
- Living organisms contain organic and inorganic chemical substances which influence the life of the organism.
- DNA and RNA are the genetic materials, and they too are made up of chemical substances.
- Energy is supplied to the body by organic chemical substances like proteins, fats, carbohydrates, etc.
- Enzymes, hormones, and other body fluids are exclusively chemical substances.
- Mutation, variation, genetic recombination, etc. have a chemical basis.

Relation of biology with physics
- Some physiological activities such as transpiration, evaporation, and the conduction of water and salts are physical phenomena.
- The most important life process in green plants is photosynthesis, which depends on sunlight, a physical factor.
- Most biological instruments and techniques, like the microscope, X-ray, chemotherapy, etc., apply physics.
- Some physical factors like force, energy, pressure, etc. have biological applications.

Relation of biology with sociology
- Sociology is the study of society with its social institutions and social relationships.
- Anthropology is the branch of sociology which deals with human origins, distribution, relationships, culture, etc.

Relation of biology with geology
- The study of soil and rock types would be incomplete without the study of the fossils found in them.

Relation of biology with climatology
- The study of climate at a particular place goes hand in hand with the study of the distribution and adaptational features of animals.

Relation of biology with mathematics
- A census of wild animals is based on the application of mathematical and statistical methods. Such applications are helpful in data compilation and analysis of living organisms.
9,300 LBS OF CO2 ARE SAVED AS WELL AS 1.6 TOE (TONS OF OIL EQUIVALENT)

GAHPs use natural gas + renewable energies(1) for heating. Using up to 40% of renewable energy (air, ground or water), GAHP technology can help increase the share of renewable energy usage in the USA. With a GAHP, every year 9,300 lbs of CO2 emissions are saved(2). A GAHP may save 1.6 Toe (Tons of oil equivalent) every year. With an impact on global warming close to zero (GWP - Global Warming Potential), the GAHP technology is the best solution to the problem of global warming due to greenhouse gases. The GAHP technology is the best option to meet the objective of reducing energy consumption. (1) Assuming that the specific heating capacity is 120,000 Btu/h with only 95,500 Btu/h of thermal input thanks to the use of renewable energy. (2) With GAHPs, 77,500 ft3 of natural gas are saved every year (1 ft3 of natural gas produces 0.12 lbs of CO2), assuming 1,000 running hours per year.
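The footnoted figures can be reproduced with simple arithmetic. This sketch only restates the manufacturer's own assumptions from footnotes (1) and (2); the numbers are the stated claims, not independently measured data.

```python
# Reproducing the annual CO2 saving claimed in the text (footnote 2).
gas_saved_ft3 = 77_500    # ft3 of natural gas saved per year
co2_per_ft3_lbs = 0.12    # lbs of CO2 produced per ft3 of gas burned

co2_saved_lbs = gas_saved_ft3 * co2_per_ft3_lbs
print(f"CO2 saved: {co2_saved_lbs:,.0f} lbs/year")  # 9,300 lbs/year

# The renewable share implied by footnote (1): heat delivered vs. gas burned.
heating_capacity = 120_000   # Btu/h delivered
thermal_input = 95_500       # Btu/h of gas input
renewable_share = 1 - thermal_input / heating_capacity
print(f"renewable share: {renewable_share:.0%}")    # ~20%
```

Note that footnote (1) implies roughly a 20% renewable contribution at these operating conditions; the "up to 40%" figure in the body text is the technology's stated upper bound, not this specific case.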
In early November of 2012, I spent a week camping at Molera State Park, at Big Sur, CA. During the trip south from Oregon, I did some online research and read an account of how the Keeling Curve came to be. The story was that Charles David Keeling, a graduate fellow at Caltech, developed some atmospheric testing equipment and went camping in 1955, somewhere along the lower reaches of the Big Sur River. It was evident from the reading that the data collected during that camping trip was something of a ‘Eureka!’ moment for Dr. Keeling. This meant that the first air samples, which helped the world to begin to measure the problem of excessive CO2 in our atmosphere, were taken at either Pfeiffer Big Sur State Park, or at Molera State Park. And, to me, this meant that Dr. Keeling did his work at one of my favorite places on Earth … possibly even where I was headed to camp. I researched a bit further and found that Dr. Keeling’s son Ralph has followed his father, and today is a scientist at Scripps. Well, not just a scientist, but the Director at the CO2 Program at Scripps, and thus one of the world’s leading scientists studying the CO2 issue. I found Ralph Keeling’s email address and contacted him, asking if he could help me identify where the first air samples were taken, during the 1955 camping trip. Ralph kindly sent an email reply, then talked to his mother, and also consulted his father’s notes. In a subsequent email, he shared this one note: “May 18, 55 Big Sur State Park, Air Samples, Camp 55, Site Half way down S.W. bank of river” With this info, and with the additional details his mother recalled, it was clear the campsite was not at Molera, but was at Pfeiffer Big Sur State Park, and I hoped I could find it. I suspected, though, that the park had long ago been redeveloped, and I likely would not be able to identify the campsite. With a little luck, my fears were soon dashed. 
As it happens, my arrival at Big Sur was shortly after completion of a new vehicle bridge into the campground. In recent years, severe weather events had caused substantial erosion and flooding. The original bridge location was at a choke-point, that intensified flood damages. Also, over many decades, popular use of this park had caused substantial vegetation damage along the river banks. The state had decided to invest in improvements, to repair all this damage, and to ensure the park could withstand future severe weather events. I visited the campground that week, picked up a map showing the campsite numbers, and shot this photograph of campsite #55. My timing was extremely lucky. How lucky? Well, just days after I shot the photo of campsite #55, the number was removed. With recent completion of the new bridge (crossing the river to the east of campsite #27 on this map), and with the permanent abandonment of a few campsites close to the river (the area marked ‘closed’ on this map), State Parks renumbered all the campsites. The only remaining question, then, was whether this site had been renumbered in the past sixty years. I went to the small library in Big Sur and found some records. I made this copy: I also found a kiosk near the Big Sur Lodge. It was in a hole, surrounded with new elevated trails. It looked like the plan was to remove the kiosk soon. I took photos, including closeups of a geologic map of the state park. The most recent year on the geologic map was located in a statement that read: “Geology Surveyed by Gordon B. Oakeshott 1950”. The map showed the same campground and road configuration as was being redeveloped in 2012. It also showed many of the improvements constructed by the CCC in the 1930’s, which were now being repaired or removed. Here is a copy of the geologic map: It appears that this 1950 map accurately reflects the campground configuration from 1955, when the first CO2 samples were taken. 
It also appears that State Parks made no substantial changes until just the last few years. The new vehicle bridge was constructed at essentially the same location as the old footbridge, which is in close proximity to the former campsite #55.
On December 5, 1848, in his annual message to Congress, President James Polk stated that gold had been found in California, and in large quantities. Specimens were sent to the Philadelphia Mint for analysis where the assayers found the metal to be of unbelievable quality. The California Gold Rush was on! Immigrants coming to California by land and sea brought pans, shovels, containers for gold, contraptions for processing gold, etc., but few thought to bring much in the way of money. Thus, by the summer of 1849 there were tens of thousands of newcomers in California, but not enough coins to go around. Trade in the early days was primarily conducted in gold dust, such as a “pinch” of dust as payment for a saloon libation. Early California was, indeed, a “gold dust” economy. Obviously, gold dust was not an ideal form of money. It was difficult to count dust and also impossible to accurately measure value because of the varying purity of raw gold. Historians estimate that during those times, gold in dust form would buy half as much as if it were in the form of coin issued by the United States Mint. Paper money was not used in the West and was certainly not accepted in foreign exchange. Gold coins and bullion bars were money. The shortage of coins in the West continued throughout the mid-1850s. Private minters and assayers, including Kellogg & Co., Moffat & Co. and F.D. Kohler, fulfilled the need for the manufacturing of money, transforming the area’s wealth of raw gold ore, dust and nuggets into rectangular gold ingots or coins. Much like a bank, the reputation of the coin or ingot’s maker was a guarantee of value. To be successful, an assayer had to have a completely unblemished reputation, his word to be, literally, “as good as gold.” As gold poured forth from the rivers, streams and hills of California, it made its way to assayers in San Francisco, Sacramento and Marysville and from there to distant points around the world, most notably New York and London. 
While coins were needed primarily for the rapidly expanding western economy, both small and large monetary ingots, which were stamped with the assayer’s name and the weight, fineness and U.S. dollar value, were used to settle large domestic transactions and for international exchange. In 1851, Moffat & Co. implemented a contract with the federal government to establish what became known as the U. S. Assay Office of Gold, with Augustus Humbert, a New York City watchmaker of excellent reputation, employed under the title U. S. Assayer of Gold. Beginning in 1851, impressive, large, eight-sided gold “slugs,” often called “adobes” (for the adobe bricks used in construction), were made in quantity, affording a convenient way to transport gold from place to place. In addition, rectangular ingots were made for larger transactions and export. In 1854 the San Francisco Mint opened, using facilities formerly occupied by the U.S. Assay Office of Gold. During the Mint’s first year, gold coins were struck in $1, $2.50, $5, $10, and $20 denominations. The highest value coin, the $20 Double Eagle, became the denomination of choice, as it was the largest regular federal piece issued. It took a year or so for the new San Francisco Mint to become fully operational, and until 1855, private minters still flourished in California. Firms such as Kellogg & Co. and Wass, Molitor & Co. remained active and continued to produce mainly $20 gold pieces, although Wass, Molitor & Co. produced some impressive $50 coins. After 1855, with the San Francisco Mint then in full production, new private coinage all but disappeared in California. Monaco Rare Coins is one of the nation’s leading dealers in California Gold Rush numismatic artifacts, including gold dust, gold nuggets, assayer and territorial coinage, assayer “slugs” and ingots and early San Francisco Mint coinage. 
We maintain what may well be the nation’s largest inventory of these rare California Gold Rush treasures and encourage you to speak with one of our Account Representatives about the current availability of the kinds of rarities of greatest interest to you. Want a copy of our latest Gold Rush Treasures catalogue that will show you the broad range of California Gold Rush rare coins, ingots and other numismatic items available from Monaco Rare Coin? Simply give us a call and we’ll send your free catalogue out to you in the mail. In addition, you should know that Monaco has a wide range of supplemental brochures, books, video and DVD programs and other material available on California Gold Rush numismatic rarities for those interested in learning more about these amazing artifacts. Call us and we’ll be happy to send this material to you with no cost or obligation whatsoever. DID YOU KNOW??? The Mint Cabinet at the Philadelphia Mint, which had been set up in June 1838 to display the nation’s coinage, received a single striking of the first Double Eagle, an 1854-S, from the new San Francisco Mint, but did not receive such issues as the 1855-S, 1856-S or anything later, as incredible as that may seem today. In fact, the gem condition 1857-S Double Eagles recovered from the 1857 shipwreck of the SS Central America in the late 1980s have no counterpart in quality at the National Coin Collection in the Smithsonian Institution in Washington, DC or any other museum anywhere!
Trust. Many people assume that just because something appears on the net, it’s accurate. They trust it. Problem is that’s not always true. Your parents said to “believe nothing of what you hear and only half of what you see.” The Secure Sockets Layer (SSL) system was developed for you to believe more than half of what you see. SSL certifies it ALL as believable. Why is that important? HACKERS. The advent of the net has produced computer-facile nuts to whom cracking passwords, stealing user codes, accessing business systems, phishing accounts, and stealing valuables has become the single-most destructive intellectual game in history. SSL is designed to correct that. SSL Certificate History SSL exists because of commonly-accepted standards adopted in 2003 by the US Government. Called the Advanced Encryption Standard (AES), it provides ways to encrypt the data in 128, 192, and 256 bit configurations. Do you think you have problems establishing an 8-character password (one capital, one digit, please)? At 8 bits per character, think about how much fun it would be to establish a 16, 24, or 32 position password and then have to remember it all. And then what it would take to do that to every character of your message? The AES standards do that for you, and in the higher range, data are sufficiently safe for SECRET communications. We want to keep our personal, professional, and financial data to ourselves. We don’t want people gaining access. Further, we want to depend on the security and confidentiality of what we read on the Internet, in our websites, in our messages, in our transactions. SSL isn’t just convenient nowadays; it’s absolutely necessary. If you want your site to be secured, you need SSL. If you want to protect your users with links, you need SSL. And certainly, if you’re passing private data, such as deposit or transfer of money transactions, you want your user to be confident that your security bases are covered. 
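The AES key sizes discussed above map directly onto key lengths in bytes, the same arithmetic as the password analogy. Here is a quick illustration in Python (standard library only; note this just generates random key material of each size, it does not perform actual AES encryption):

```python
import secrets

# AES key sizes from the standard: 128, 192, and 256 bits.
# At 8 bits per byte, that is 16, 24, and 32 bytes of key material.
for bits in (128, 192, 256):
    key = secrets.token_bytes(bits // 8)  # cryptographically strong random bytes
    print(f"{bits}-bit key -> {len(key)} bytes")
```

Compare that with an 8-character password, which is only 8 bytes; the AES standard is handling far more key material than a human could memorize.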
If you’re trying to do e-commerce at your site, you’ll be penny-wise and pound-foolish if you don’t part with the very minor costs to acquire certified SSL protection. SSL provides protection, not only for you, but also for your customers. HostPapa SSL Certificates Let’s be very candid. There are many places where you can purchase SSL protection, and for a wide variety of prices. It would also be fair to say that what many offer is the lowest possible level of protection for the least amount of money. Buy that and endure their constant prompts to upgrade to higher levels of protection. Buy the entire battery of services, and you could pay as much as $133 per year for exactly the same coverage you can obtain from HostPapa for less than $20. For that you get it all. HostPapa SSL Certificates Features Do you need unlimited server licenses? HostPapa has them through its partnership with Globalsign SSL certificates. So do others. The same is true for security seals, padlock symbols, “https” domains, 256 bit encryption — with the authority to increase that, 99.9% browser recognition, mobile device support, domain authentication, and round-the-clock customer service. They will sell multiple years, if you wish, and guarantee reissuance. Every one of those companies offers the same protection, albeit at varying prices. HostPapa offers something the others do not: malware protection. If you suddenly find your system bogged down with “adware and other aberrations,” that, alone, is worth the twenty bucks. But HostPapa offers yet something else not yet available with the others. The price covers one domain, without subdomains. They offer a separate option, still less expensive than their competition, which will support unlimited subdomains. It’s called the Wildcard option. Trust is the issue. HostPapa offers SSL certificates by Globalsign to their customers at prices far better than the competition. Read about it at http://www.hostpapa.ca/. 
Look for the closed padlock in the upper left hand corner. HostPapa is Canada’s Unlimited GREEN web hosting provider with available SSL protection. For more information about HostPapa’s SSL certificates, please visit their website. You don’t have to use HostPapa for your website to get the SSL, but who can beat $3.95/month for hosting and $19.99/year for a SSL certificate?
DAPPS Lovers, have you ever typed something on your computer only to realize that you made a mistake? Maybe you misspelled a word, accidentally deleted a sentence, or accidentally hit the caps lock key. In these situations, the undo function on your keyboard can be a lifesaver. In this article, we will go through everything you need to know about how to undo on keyboard.

What is the Undo Function?

The undo function on your keyboard allows you to undo any action you’ve taken on your computer. For example, if you were typing an email and accidentally deleted a sentence, you could use the undo function to bring the sentence back.

How to Undo on Keyboard

To undo something on your keyboard, there are a few different methods you can use depending on the application you are using. Here are a few of the most common methods:

Method 1: Using the Ctrl + Z Keyboard Shortcut

| Step | Action |
| --- | --- |
| Step 1 | Select the text or action you want to undo |
| Step 2 | Press the Ctrl + Z keys simultaneously |
| Step 3 | To redo, press the Ctrl + Y keys simultaneously |

Method 2: Using the Edit Menu

| Step | Action |
| --- | --- |
| Step 1 | Click on the Edit menu at the top of your screen |
| Step 2 | Select the Undo option from the dropdown menu |
| Step 3 | To redo, select the Redo option from the Edit menu |

Method 3: Using the Right-click Menu

| Step | Action |
| --- | --- |
| Step 1 | Right-click on the item you want to undo |
| Step 2 | Select the Undo option from the dropdown menu |
| Step 3 | To redo, right-click on the item and select the Redo option from the dropdown menu |

The Strengths and Weaknesses of the Undo Function

While the undo function can be incredibly useful, there are also some potential drawbacks to consider. Here are a few of the strengths and weaknesses of the undo function:

1. Undo Can Save Time

By using the undo function, you can quickly undo mistakes rather than having to start all over again. This can save you a lot of time and effort.

2.
Increased Productivity

If you are a fast typist, the undo function can help you to increase your productivity by quickly fixing mistakes.

1. Can Encourage Sloppy Work

Knowing that you can easily undo mistakes may encourage you to take shortcuts or be less careful when completing tasks.

2. May Not Always Be Available

Not all applications or software programs have an undo function. If you are working in a program that does not offer an undo function, you may need to rely on other methods, such as backing up your work.

Frequently Asked Questions

1. How can I tell if an application or program has an undo function?

Most applications or programs that have an undo function will have the option highlighted in the Edit menu. Alternatively, you can try using the keyboard shortcut Ctrl + Z to see if the function works.

2. Can I redo an action if I change my mind?

Yes, most applications or programs that contain an undo function will also offer a redo function.

3. How far back can I undo?

The number of actions you can undo may vary depending on the application or program you are using. Some programs may allow you to undo only the most recent action, while others may allow you to undo several actions at once.

4. Can I undo something that I accidentally closed?

Unfortunately, the undo function cannot be used to undo the closure of an application or file. If you accidentally close something, you will need to reopen it and start again.

5. Can I undo something that I did a while ago?

This depends on the application or program you are using. Some programs will allow you to undo actions that you performed hours or even days ago, while others will only allow you to undo the most recent actions.

6. What happens if I accidentally use the undo function?

If you accidentally use the undo function and undo something that you did not mean to, you can always use the redo function to bring the item back.

7. Is the undo function available on all keyboards?
Yes, the undo function is a standard function on most computer keyboards.

In conclusion, the undo function is a powerful tool that can save you time and increase productivity. However, it is important to use the function wisely and consider the potential drawbacks. The next time you make a mistake on your computer, remember all the methods and tips we shared on how to undo on your keyboard. Use them with care, and you should find that they can be a real lifesaver. Happy typing!

This article is for educational and informative purposes only. We do not encourage you to use the undo function as an excuse to be sloppy or careless in your work. It is important to always strive for excellence and take responsibility for your actions.

Recommended Video About: How to Undo on Keyboard
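Under the hood, the undo/redo behavior described in this article is commonly implemented with two stacks, one holding actions that can be undone and one holding actions that can be redone. A minimal, purely illustrative sketch in Python (real applications store richer action objects, not strings):

```python
class UndoManager:
    """Two-stack undo/redo, the classic textbook implementation."""

    def __init__(self):
        self.undo_stack = []   # actions that Ctrl+Z can reverse
        self.redo_stack = []   # actions that Ctrl+Y can reapply

    def do(self, action):
        self.undo_stack.append(action)
        self.redo_stack.clear()  # a brand-new action discards redo history

    def undo(self):
        if self.undo_stack:  # nothing happens if there is nothing to undo
            self.redo_stack.append(self.undo_stack.pop())

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())


mgr = UndoManager()
mgr.do("type 'hello'")
mgr.do("delete sentence")
mgr.undo()                 # brings the sentence back
print(mgr.redo_stack)      # ['delete sentence']
```

This also explains two FAQ answers above: "how far back can I undo" is just how deep the application lets the undo stack grow, and an accidental undo is recoverable because the popped action waits on the redo stack.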
Are you confused by the Outlook Address Book and Contacts folder? Think first of the address book in your mother's desk drawer. She has some addresses written on pages in the book, with scraps of paper or envelopes slipped between pages providing more addresses. Now apply that analogy to Outlook's address book. Both address books hold many addresses from many sources. The Contacts folder, Global Address List (Exchange only), and other address lists are the pages and envelope scraps containing addresses; the Outlook Address Book is the container that holds everything together. Note that the Outlook Address Book displays only electronic addresses (email addresses and fax numbers). Use the Find a Contact command or open the Contacts folder to see contacts that don't have electronic addresses. Published July 27, 2004. Last updated on September 9, 2011.
The origins of the modern camera predate the birth of photography by over a thousand years. They can be traced back to the ancient Chinese and Greeks, both of whom used the camera obscura to project images into large, dark chambers that people could enter. Once inside, they were treated to a magical and fabulous display: the outside world projected upside down and backward on the wall opposing the aperture. A number of important historical thinkers have examined and written about the device: Aristotle, Alhazen, Freiberg, Descartes, Kepler and many others. Notably, in the 13th century, English philosopher and Franciscan friar Roger Bacon published his work Perspectiva. Here he shared his belief that the devil was responsible for the function of the mystical device: a truly dark magic. A few hundred years later, however, Sir Isaac Newton published his ground-breaking work Opticks (1704) and demonstrated that the device's function, simply and far less sensationally, is due to the laws of physics and the rectilinear propagation of light, which simply means that under normal circumstances light travels in a straight line. This concept makes photography possible. Debatably, no device has changed more significantly over the course of photographic evolution than the camera. What began as a simple light-tight box with an aperture at one end and an apparatus to collect light—sensitized paper, polished silver, glass plates or, finally, film—at the other is now a complex array of circuit boards, wires and computer parts all stuffed into increasingly smaller and smaller packages.
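The rectilinear propagation of light is also why the camera obscura's image is inverted, and it lets you predict the image size with nothing more than similar triangles. A small illustrative calculation in Python (the distances here are made-up numbers, not measurements of any historical chamber):

```python
# Pinhole geometry: rays travel in straight lines, so a ray from the
# top of the object crosses at the aperture and lands low on the wall,
# inverting the image.  By similar triangles:
#   image_height / image_distance = object_height / object_distance
object_height = 2.0      # meters: a hypothetical tree outside the chamber
object_distance = 10.0   # meters from the tree to the aperture
image_distance = 0.5     # meters from the aperture to the back wall

image_height = object_height * image_distance / object_distance
print(image_height)      # 0.1 -- a 10 cm image, upside down
```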
Missouri’s Task Force on the Prevention of Sexual Abuse of Children has made 22 recommendations to the Governor and legislature. The Task Force was created by lawmakers in 2011, and brought together advocates, legislators, educators and professionals to better protect children in Missouri from sex abuse. Joy Oesterly with Missouri Kids First says shortly after the task force was created, the sex abuse scandal at Penn State broke, bringing national attention to the problem. Oesterly says the recommendations focus on community-based support, mental health services, changes in statute and preventative education. She also stresses the importance of mental health services — both for child victims as well as youth who exhibit inappropriate or illegal sexual behavior. She says hope and recovery is possible for the future of both of them.

AUDIO: Jessica Machetta reports (1:20)

View the full report HERE.

Here’s a list of the recommendations:

1: Community-based child abuse prevention education needs to be expanded and be comprehensive in nature.
2: All schools and youth-serving organizations should have specific child sexual abuse prevention policies.
3: Existing state child abuse prevention programs should include programing targeted at preventing child sexual abuse.
4: Expand home-visiting programs and specifically include child sexual abuse prevention in these programs.
5: Create and implement standardized training for all mandated reporters.
6: Fund the creation and implementation of standardized, discipline-specific training for members of the multi-disciplinary team (MDT) and judges.
7: Identify and fund discipline-specific expert technical assistance for MDT members.
8: Establish discipline-specific best practices or standards for multi-disciplinary teams, law enforcement, prosecutors and medical providers.
9: Establish mechanisms for addressing the secondary trauma experienced by individuals who work to address and prevent child sexual abuse.
10: Assess for and address domestic violence when investigating child sexual abuse and providing services to victims and caregivers.
11: Identify and fund evidence-based early intervention and treatment for youth with illegal/inappropriate sexual behaviors.
12: Identify and fund the expansion of mental health services to children who have been sexually abused.
13: Create and fund a child sexual abuse public awareness campaign.
14: The General Assembly should consider increased investment in preventing child sexual abuse in order to reduce the substantial financial, health and social costs associated with childhood trauma.
15: Private foundations in Missouri should increase funding to prevent and address childhood trauma.
16: Submit to Missouri voters a proposed constitutional amendment allowing evidence of signature crimes, commonly referred to as propensity evidence, to be used in child sexual abuse cases.
17: Modify 210.115 RSMo. to require mandatory reporters to directly report suspected child abuse and neglect to Children’s Division.
18: Clarify the term “immediately” in the mandatory reporting statute, 210.115 RSMo., and school reporting statute, 167.117 RSMo.
19: Clarify 544.250 RSMo. and 544.280 RSMo. to allow for hearsay evidence at preliminary hearings.
20: Amend 491.075.1 RSMo. to clarify that the statute allows for the use of child witness statements relative to prosecutions under Section 575.270.
21: Modify the definition of deviate sexual intercourse in 566.010 RSMo. to include genital to genital contact.
22: Modify 556.037 RSMo. to eliminate the statute of limitations for the prosecutions of first-degree statutory rape and first-degree statutory sodomy.
The theme of profound despair is illustrated in this poem, which is based on the Vietnam War. The poet depicts needless war and aggression, as well as a senseless desire to fight and be killed. At the start, there is a clever use of imagery that conveys the war's careless mindset: in reality, lives are lost, but to the participants it is merely mathematics, simple body counts. This is evident when the author asks, "Who allowed such monstrous things to happen?" The poet evokes a sense of helplessness and impending doom, of breastless mothers and armless infants. The poem, in its rhetoric, makes one wonder why there had to be a war in the first place and for what gain. Questions such as "Who made them crawl in the mud?" and "Who sent them to die, or even worse, live?" highlight the plight of the foot soldier and his lack of hands-on control over his own destiny; he causes death for reasons he does not fully understand and for gains he does not profit from. Desperation is deeply engraved in this poem through the effective use of enjambment. As a reader, one is caught up in the emphatic diction of the inability to actually do something about the war and its consequences. For instance, the poet asks, "What must we do?" This highlights the knowledge that something needs to be done, though what exactly can yield results is unknown. At one point the writer trembles for their country and admits to having to walk in shame for the failures and scars of war. It is excruciatingly painful to accept defeat, yet it is the only way forward. The Vietnam War, as detailed in this poem, is cruel, cold and crestfallen. It makes us question the need for a war if it yields this much despair. It further breaks the hearts of many who read of it and leaves the profound physical and psychological scars of war tattooed on the lives of those who lived it.
Last month, we did a little work with Windows Management Instrumentation (WMI) to show you how PowerShell interacts with external providers. In that column, we did some disk inventory, then filtered and formatted the output, to make it show only a selected portion of the possible output. You may have noticed one glaring omission in the beginning of that exercise. Remember when we were starting out and I was able to get going because I knew the name of the class I needed to work with? As I said at the time (and as you'll see if you use the -list argument to get-wmi), there are hundreds of classes. I can't remember all the words to Joy to the World, so I plainly can't remember the name of every single WMI class out there. For the sake of that example, I had to cheat a little bit and use a class I knew before I started. Let's make it a little easier to browse WMI classes without cheating by filtering the content. The good news is that this is pretty simple—all it takes is a new cmdlet and working with variables. Basic text filtering Filtering output by text in VBScript requires messing around with regular expressions. While PowerShell supports regular expressions, it doesn't require them. Instead, you can use the Select-String cmdlet. At its simplest, it works like this: Feed this cmdlet the information to search, and it will find the text you're looking for: "PowerShell", "Seashell", "Monad" | select-string -pattern "PowerShell" In this command, you're creating three strings and piping them to Select-String, telling it that the string you're looking for reads "PowerShell". That is exactly what this command will do. Just be careful to type this command exactly as it's written here. Add extra spaces, and you may get an error. But looking for complete strings isn't always helpful. What if you want to find all strings that have the word "shell" in them? To do that, you'd amend the command slightly so that the argument to -pattern was "shell."
"PowerShell", "Seashell", "Monad" | select-string -pattern "shell" That command will return both "PowerShell" and "Seashell." Select-String is case-insensitive by default, but doesn't have to be. Let's change this example slightly. "Powershell", "Seashell", "Monad" | select-string -pattern "shell" –casesensitive Although the word "shell" appears in both "PowerShell" and "Seashell," the above command will return only "Seashell." Filtering the text you just typed in isn't the most likely scenario. It's more useful to find entries that match when you don't know what's out there and it's a ton of data. You can use Google Desktop to find files on your computer. You can use Select-String as a simple variant of Google Desktop to search files. For example, say I'm back working on a script and need some code examples for Select Case. I know there's a repository of sample scripts, but I don't want to search through all of them when I don't know whether they contain the code I need. Therefore, I can use select-string to search those files in the directory containing the repository, as in the example below. (Had I wanted to search the current folder, I would not have needed to provide the path.) The output will be the relevant line from each script, as shown here. Select-string -path "c:\scripts\*.vbs" "Select Case" C:\scripts\concatenate.vbs:2:Select Case Wscript.Arguments(0 C:\scripts\enhanced.vbs:7:Select Case sInput Now I can open the script files to see how Select Case is used. Filtering variable content Now that you've got the idea, let's filter the results of a command. As we said at the outset, that's where we are with get-wmi –list. Type it, and you've got a ton of output—more than you can easily browse. To make it easier to deal with, we'll do that search for WMI objects, but save it to a variable. Variables in PowerShell begin with a dollar sign, i.e., $variable. 
To assign a value to a variable (we'll start with something simple), just set them equal to each other, like this: $myvariable = 123 $myvariable = "Now is the time for all good men" $myvariable = get-date to set the value of $myvariable equal to the current date. To view the contents of a variable, just type its name at the command prompt. As you might guess, it's easy to shove the excruciatingly long list you get from get-wmi -list into a variable. $wmi = Get-WmiObject -list Now that that list is stored in the variable, let's cut it down to size a bit with Select-String before we have to look at it. $wmi | select-string -pattern "Win32_" This command will return all WMI classes with names including the string "Win32_". This narrows the field to the classes we'll use in Windows. But this is still a long list -- how about narrowing the field a bit more? Maybe we'd like to see only the results with the word "network" in them. We'll do exactly what we did before—assign the results of a command to a variable. Only this time our new variable, $w32wmi, will contain only the classes from our first search: $w32wmi = $wmi | select-string -pattern "Win32_" Now that we've got that, we can begin running new searches on it: $w32wmi | select-string -pattern "Network" Now we're getting some manageable output: Having done all this, I can easily go through the output to find the class I need for my task. (If I'm not sure which one I need, I still need to look through them, but if I'm pretty sure and just needed to check spelling, this would help a lot.) If I'd like to expand my search, I can just add a second parameter to Select-String: $w32wmi | select-string -pattern "Network", "Disk" Adding a second search parameter makes the search act like a Boolean OR statement, not an AND statement. That is, this search will return all classes that include the word "Network" OR "Disk", not just the ones that contain both.
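Putting the pieces together, the whole filtering workflow from this column can be chained in a few lines. A sketch using only the cmdlets shown above (class names you see will depend on your machine):

```powershell
# Capture the full WMI class list once, then narrow it in stages.
$wmi    = Get-WmiObject -List                        # hundreds of classes
$w32wmi = $wmi | Select-String -Pattern "Win32_"     # Windows classes only
$w32wmi | Select-String -Pattern "Network", "Disk"   # OR match, as noted above
```

Because each stage is saved in a variable, you can re-filter cheaply without re-querying WMI.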
In conclusion, this article has looked at one more essential part of working with PowerShell—searching within your results to find the exact information you needed. Filtering information with Select-String is easy. Get the data to filter, whether it's a string, a file or command output, and pipe it to the command for processing. This makes it much easier to work with WMI when you're not sure of the class names, and lets you easily find files with particular information.
D is a general-purpose programming language with static typing, systems-level access, and C-like syntax. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Introductory, self-paced courses are available now to help you learn to code in many different languages. People wishing to enter the world of computer programming can choose to specialize in any number of popular programming languages and find many entry-level opportunities. A key attraction is that C is independent of the architecture of any particular machine, a fact that contributes to the portability of C programs. Programmers normally have a specialization in a single language, such as XML, PHP, Perl, HTML, or SQL. Then your coded program must be keyed in, perhaps using a terminal or personal computer, in a form the computer can understand. C is a general-purpose programming language used for a wide range of applications, from operating systems like Windows and iOS to software used for creating 3D movies. If you want to go professional and become a full-time developer, an intensive, in-person coding bootcamp might help you out, particularly if you learn best in a structured setting with real people to motivate you. In this tutorial, all C programs are given with a C compiler so that you can quickly modify the C program code. A C program is translated into assembly code; C supports pointer arithmetic (low-level), but it is machine independent (a characteristic of high-level languages). C is considered the mother language of all modern programming languages because many compilers, JVMs, kernels, etc. are written in C.
<urn:uuid:f75e92d2-2403-4da4-9b0a-c1418c951c7e>
CC-MAIN-2023-40
http://appdownloadreview.com/programming-tutorials-coding-problems-and-follow-questions/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00518.warc.gz
en
0.919615
376
3.171875
3
Proton Pump Inhibitors for Gastroesophageal Reflux Disease (GERD)

These medicines are taken by mouth (as a pill or liquid) once or twice a day. Some of these drugs are given intravenously (IV) in the hospital. Some of these medicines are available without a prescription. But if you have been using Prilosec OTC to treat your symptoms for longer than 2 weeks, talk to your doctor. If you have GERD, it could be causing damage to your esophagus. Your doctor can help you find the right treatment.

How It Works

Proton pump inhibitors reduce the production of acid in the stomach. This leaves little acid in the stomach juice, so that if stomach juice backs up into the esophagus, it is less irritating. This allows the esophagus to heal.

Why It Is Used

Proton pump inhibitors are usually used: People with Barrett's esophagus are often treated with proton pump inhibitors.

How Well It Works

Proton pump inhibitors can heal the esophagus in about 8 out of 10 people who take them.1 Proton pump inhibitors also work to help symptoms of GERD. But the number of people who take PPIs and who have no GERD symptoms is usually less than 5 out of 10 people. That means that of the people taking PPIs, more than 5 out of 10 still have some GERD symptoms.

Proton pump inhibitors work best when they are taken 30 minutes before your first meal of the day. If taking one pill before your first meal does not completely relieve your symptoms, talk to your doctor about taking another pill before your evening meal.

Proton pump inhibitors may have more serious side effects, too: See Drug Reference for a full list of side effects. (Drug Reference is not available in all systems.)

What To Think About

For a very small number of people who take proton pump inhibitors, the medicines do not work well. For these people, other treatments for GERD can be tried. Sometimes proton pump inhibitors do not work well because people do not know when to take them.
eMedicineHealth Medical Reference from Healthwise © 1995-2012 Healthwise, Incorporated.
<urn:uuid:4fe1f016-6fbc-4eea-88d6-6d93956611ad>
CC-MAIN-2014-23
http://www.emedicinehealth.com/script/main/art.asp?articlekey=130186&ref=130197
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270528.34/warc/CC-MAIN-20140728011750-00422-ip-10-146-231-18.ec2.internal.warc.gz
en
0.932907
541
2.625
3