Cleaning and disinfection are two separate but equally important processes in cleanroom compliance. Know these 8 steps. Our last two posts addressed some common concerns and issues in choosing cleanroom disinfectants. Achieving the appropriate microbiological cleanliness levels for a class of cleanroom is paramount in industries like medical device assembly and pharmaceutical manufacturing. Disinfectants and applicators are only two pieces of the puzzle. Here are 8 steps to keep in mind when ensuring your cleanroom is kept clean.

1. Know the difference between cleaning and disinfection. Cleaning is the removal of soil, such as dirt, dust, and grease. Detergents are typically used in this process, and it must be completed before disinfection. The chemical germicides used to disinfect eliminate vegetative microorganisms. Both steps are important, and they must be done in the right order.
2. Choose the right agents. Between class, purpose, and equipment, cleanrooms differ in the kind of cleaning and disinfectant agents that are most appropriate for compliance and validation. We wrote this article about considerations for disinfectants. For your cleaning detergent, make sure:
   - It is neutral and non-ionic
   - It does not foam
   - It is compatible with the disinfectant
3. Understand the differences among disinfectants. Specifically, you need to know how the agent acts against parts of the microbial cell, and how to choose between non-sporicidal and sporicidal disinfectants, or non-oxidizing and oxidizing chemicals.
4. Validate your disinfectants. This is especially important for pharmaceutical facilities. You need to challenge the disinfectant solution as well as different surface materials and ranges of microorganisms. The manufacturer can perform some testing, but some should be performed in-house.
5. Know the factors affecting disinfection efficacy. A number of things will challenge efficacy, including the number and type of microorganisms present, and the temperature and pH of the solution.
6. Choose your cleaning materials carefully. We discussed applicators in this post.
7. Follow your cleaning techniques exactly. Whether it is the direction you sweep or mop, the method of application, allowing the product to dry, or any other protocol, be certain that you clean and apply exactly as specified.
8. Monitor efficacy. As with any cleanroom protocol, test and monitor your method with microbiological sampling of surfaces.

If you need validation or certification, or if you have questions about these processes, consult Gerbig Engineering Company. Our 30 years of experience make us experts at cleanroom construction, certification, and validation. Contact us at 888-628-0056 or [email protected].
Part 1: Metropolitan Regions

CORRECTION: An earlier version of this article misstated the growth of the senior population. Between 2000 and 2010, the senior population grew at a 15 percent pace, not 20 percent, as stated, increasing the senior population by 5 million, not 40.2 million. The previous version said that during this decade, the Census Bureau projects the senior population to increase by another 60 million, which is incorrect. Census forecasts that the total senior population will reach 60 million by the end of this decade. The corrected version appears below.

It has been widely reported that the population of seniors—i.e., those over 65—in the United States is growing rapidly. Less reported is the extraordinary acceleration in the rate of this growth. For instance, from 1990 to 2000 the senior population grew at 12 percent; over the next ten years, however, it grew at 15 percent, to 40.2 million by 2010. During the current decade, the senior population is projected by the U.S. Census Bureau to grow by a stunning 55 percent, adding more than 20 million seniors for about 62 million in all.

This growth is significant nationally, of course, and is already figuring in national debates about health care and Social Security, among other topics. At the local level, though, the impact will be even greater. A major reason for this is the wide variation in the rate of growth of the senior population around the country. Among the 50 largest metropolitan regions in the United States, the rate at which the senior population grew during the past decade ranged from a high of 50 to 60 percent down to a decline of 5 percent. While most regions did experience growth, three—Pittsburgh, New Orleans, and Buffalo—actually saw their senior populations decline. Among cities, the variation was even wider. In fact, in not quite half of the cities (43 percent) the senior population either stayed the same or declined. More will be said about this in Part 2 of this series.
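The corrected figures above are mutually consistent, which a quick back-of-the-envelope check confirms. This sketch uses only the numbers quoted in the article (it is an illustration, not part of the original piece):

```python
# Figures quoted in the article, in millions of people over 65.
pop_2010 = 40.2          # senior population in 2010
growth_2000s = 0.15      # 15 percent growth from 2000 to 2010 (per the correction)
growth_2010s = 0.55      # projected 55 percent growth during the current decade

pop_2000 = pop_2010 / (1 + growth_2000s)       # implied 2000 senior population
added_2000s = pop_2010 - pop_2000              # ~5.2 million, matching the correction
pop_2020 = pop_2010 * (1 + growth_2010s)       # ~62.3 million, "about 62 million in all"
added_2010s = pop_2020 - pop_2010              # ~22 million, "more than 20 million"

print(round(added_2000s, 1), round(pop_2020, 1))
```

Note that the projected total (roughly 62 million) sits slightly above the 60 million figure the Census Bureau forecast cited in the correction; the article itself rounds both ways.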
The wide disparity in growth rates among metro regions means the benefits and challenges of a growing senior population will hit each metro region differently. Local officials and planners, developers, and others working in the housing and retail sectors, and those providing services for aging seniors, should pay close attention.

The Impact of Seniors

(Table 1, the ten metro regions with the largest number of new seniors, did not survive extraction; the only remaining entry is No. 2, New York City, with 167,000 new seniors.)

Seniors benefit communities as well as present them with challenges. For instance, seniors are wealthier than any other age group despite the recession, their rates of homeownership are higher, they support the local economy, they pay local taxes, and they have a very low crime rate. They also are more politically conservative, and what they want in and need from a community is often quite different from what young families want and need. This is changing the local political climate in those suburbs where the growth of seniors is significant. Seniors are, for instance, pushing for more parks, open space, and libraries, often at the expense of funds for schools and playgrounds. While this is not in itself a problem, as these are all desirable community amenities, in fiscally constrained suburbs where schools are overcrowded and teachers are being laid off, the educational needs of children are becoming a lower priority.

Growing senior populations also present challenges and opportunities for housing developers, retailers, and service providers. Seniors often have the time and experience to lead local efforts to block new infill development. However, they also have the means to buy property or rent new apartments in suburban town centers if they are able and choose to sell their large suburban homes.

To Where Are Seniors Moving?

(Table 2, the ten metro regions where the senior population grew the fastest between 2000 and 2010, did not survive extraction; the only remaining entry is No. 3, Las Vegas, at 50 percent.)

So, where is the senior population growing the fastest?
The ten metro regions with the largest number of new seniors are listed in table 1. Absolute numbers, however, do not necessarily show where seniors may be having the greatest political and market impact. This is better indicated by the rate at which the senior population is growing. In other words, which communities are experiencing the fastest growth in their senior populations? For instance, while the number of new seniors in the New York City metro region was the second highest in the nation, the senior population grew at a rate of just 7 percent. Thus, its impact is likely to be less than that witnessed in, say, Raleigh, North Carolina, where the rate of growth was a surprising 60 percent.

A fast rate of growth also indicates a region that attracted new seniors who migrated from other parts of the country. The census numbers, however, do not break down how much of this migration occurred before the housing crash; studies are showing that internal migration has slowed significantly since 2007, suggesting that these senior "hot spots" may have already cooled down. Only four of the metro areas with the fastest-growing senior populations (table 2) were also among the fastest growing in absolute numbers of seniors—Houston, Dallas, Phoenix, and Riverside; these clearly have been senior magnets. The question remains, however, whether the recession and the housing market crash will slow or stop this trend in many of these markets.

From Where Are Seniors Moving?

Where did those seniors who moved during the period from 2000 to 2010 come from? Following are the ten metro regions where the senior population declined or grew most slowly over the past decade. Note that in each of these regions where the senior population grew, it did so at a rate well below the national rate of 15 percent, suggesting that seniors were moving out of the area.

(Table 3, the ten metro regions where the senior population declined or grew most slowly, did not survive extraction; the only remaining entries are St. Louis and New York City, tied at 7 percent, and No. 10, New Orleans, at -5 percent.)

This list (table 3) confirms that during at least the early part of the last decade, seniors were moving from the cold Northeast and old industrial metropolitan areas to the warmer climes of the South, West, and Southwest. The New Orleans region is, of course, an exception, and the outward migration from there is largely—though not solely—attributable to Hurricane Katrina. Another exception is Tampa. Its low rate of growth mirrors Miami's 8 percent rate of growth in the senior population. Florida did attract many seniors, only they moved to Orlando and Jacksonville instead, which had growth rates of 29 and 31 percent, respectively. And even though it would seem that many seniors moved away from the New York City metro area, it still added more seniors than anywhere else in the country but Los Angeles.

Once again, in looking at these numbers and lists, keep in mind that the last decade spans two quite distinct periods—the housing boom through 2006 and the housing bust and recession from 2007 on. It may well be the case that the trends of the early years in the decade are obscuring the impact of the crash, and that the latter part of the decade may be a better indicator of what will be experienced in the years ahead. Such is the challenge of reading trends in uncertain times like the present.

ULI–the Urban Land Institute
I don’t tweet. If you have been reading this blog for any length of time, you know why. I have a hard enough time constraining my writing to a few hundred words, much less 140 characters. Nevertheless, it is interesting to see how scientists use Twitter to communicate what they are doing. For example, I recently wrote about a particle physicist who used Twitter to explain to people what was happening in the Fukushima nuclear power plant disaster. He became a bit of a celebrity as a result and has used that platform to do some serious science related to the level of radioactive contamination in Japan’s food supply.

Why in the world am I discussing a social media tool that I don’t even use? Because a reader posted a link on my Facebook page. It contains a series of tweets posted with the hashtag #overlyhonestmethods, which was started by a scientist known as “dr. leigh.” The hashtag has become a bit of a phenomenon. Essentially, scientists use it to explain the real reasons behind some of their methods.

The picture at the top of the post is a classic example of a tweet that contains the hashtag. I immediately related to it, because as a nuclear chemist, I have built a lot of systems that blew up or failed in some other spectacular way. However, when I finally got a version of the system to work, I would refer to it as “representative.” That’s just the way it is done.

If you have some time, scroll through a few of the tweets. Some of the language can be a bit foul, but the tweets give you a brief glimpse into the real world of scientific research. You might be a bit surprised at what you read!
Registered Nurse Job Description, Career as a Registered Nurse, Salary, Employment

Education and Training: College and possibly advanced degree
Salary: Median—$52,330 per year
Employment Outlook: Excellent

Definition and Nature of the Work

Registered nurses (RNs) work to promote good health and prevent illness. They educate patients and the public about various medical conditions; treat patients and help in their rehabilitation; and provide advice and emotional support to patients' families. RNs use considerable judgment in providing a wide variety of services.

Many registered nurses are general-duty nurses who focus on the overall care of patients. They administer medications under the supervision of doctors and keep records of symptoms and progress. General-duty nurses also supervise licensed practical nurses (LPNs), nursing aides, and orderlies.

RNs can specialize: (1) by work setting or type of treatment—critical-care nurses work in intensive care units, and psychiatric nurses treat patients with mental health disorders; (2) by disease, ailment, or condition—HIV/AIDS nurses care for patients with HIV infection and AIDS, and addictions nurses treat patients with substance abuse problems; (3) by organ or body system—nephrology nurses care for patients with kidney disease, and respiratory nurses treat patients with disorders such as asthma; and (4) by population—school nurses provide care for children and adolescents in school, while geriatric nurses provide care for the elderly. RNs may also work in combined specialties, such as pediatric oncology (the care of children and adolescents with cancer) or cardiac emergency (the care of patients with heart problems in emergency rooms).

Some RNs choose to become advanced-practice nurses and get special training beyond their RN education.
They are often considered primary health care practitioners and work independently or in collaboration with physicians. There are four categories of advanced-practice nurses: nurse-practitioners, clinical nurse specialists, Certified Nurse-Midwives, and Certified Registered Nurse Anesthetists.

The duties of nurse-practitioners include conducting physical exams; diagnosing and treating common illnesses and injuries; providing immunizations; managing high blood pressure, diabetes, and other chronic problems; ordering and interpreting X-rays and other lab tests; and counseling patients on healthy lifestyles. They practice in hospitals and clinics and often deliver care in rural or inner-city locations not well served by physicians. Some have private practices. Nurse-practitioners can prescribe medications in all states, and in many states they can practice without the supervision of physicians.

Clinical nurse specialists provide care in specialty areas, such as cardiac, oncology (cancer), pediatrics, and psychiatric/mental health. They work in hospitals and clinics, providing medical care and mental health services, developing quality assurance procedures, and serving as educators and consultants.

Certified Nurse-Midwives provide routine health care for women, but their practices are focused on pregnancy and delivery of babies. They lead classes in childbirth, sibling preparation, and care of newborns. If pregnancies continue without complications, nurse-midwives provide all prenatal care, assist mothers during labor, and deliver the babies. Following the births, they make sure that mothers and newborns are well and provide follow-up care. If emergencies occur, nurse-midwives are trained to provide assistance until doctors arrive.

Certified Registered Nurse Anesthetists receive special training in the use of anesthetics, which produce a state of painlessness or unconsciousness.
They work under the supervision of anesthesiologists (physicians who specialize in anesthesia) or other physicians. Most work in operating rooms during surgery, but others administer anesthetics in delivery rooms, emergency rooms, and dental offices. Sometimes nurse anesthetists help to care for patients during recovery from anesthesia.

Some experienced hospital nurses are head nurses or directors of nursing services. RNs also work as nurse educators or as researchers in hospitals. They may become forensics nurses, combining their nursing knowledge with law enforcement. They often work with victims who have been assaulted. Many registered nurses work in private doctors' offices or clinics. They may assist such physicians as obstetricians and dental surgeons.

Education and Training Requirements

To become registered nurses, high school graduates can earn associate degrees in two-year nursing programs at community colleges; earn diplomas in three-year programs offered by hospitals or independent schools of nursing; or earn bachelor of science in nursing degrees (BSN). BSN programs usually take four or five years to complete and combine liberal arts courses with scientific and technical training. All programs include practical experience. Those who have completed an approved program are eligible to take the national written licensing exam, which is administered by each state. All states require licensing.

The profession is moving toward two levels of nursing: technical nursing, which requires associate degrees, and professional nursing, which requires bachelor's degrees. Under this system, only nurses with bachelor's degrees would be eligible for RN licensing. The American Association of Colleges of Nursing (AACN) and other leading nursing organizations recognize the BSN as the minimum educational requirement for professional nursing.
While graduates can begin practice as RNs with associate degrees or hospital diplomas, the BSN is essential for nurses seeking to perform at the case-manager or supervisory level. Students desiring to become advanced-practice nurses must obtain master of science in nursing degrees (MSN). Some nurses go on to earn doctorates.

Getting the Job

Nursing school and college placement services can help graduates find jobs. Graduates can also apply directly to hospitals, clinics, nursing homes, and doctors' offices. Those interested in community health jobs can contact public health departments, home health agencies, and visiting nurse associations. The armed services also have openings for nurses. In addition, professional nurse registries list openings for private-duty nurses.

Advancement Possibilities and Employment Outlook

Advancement in nursing depends on education, experience, and place of employment. Registered nurses can become supervisors of departments or specialists in particular fields of nursing. Those with bachelor's or master's degrees are more likely to move into higher-level jobs. Many positions in research, teaching, and administration require master's degrees or even doctorates in nursing.

According to the U.S. Department of Labor's Bureau of Labor Statistics, "Registered nurses are projected to create the second largest number of new jobs among all occupations; job opportunities in most specialties and employment settings are expected to be excellent, with some employers reporting difficulty in attracting and retaining enough RNs." Employment of registered nurses is expected to grow much faster than the average for all occupations through 2014. In part, increases in demand are due to technological advances in patient care, which permit a greater number of medical problems to be treated. In addition, the number of elderly people is projected to grow rapidly, which should spur demand for RNs in nursing homes and long-term care facilities.
Employment in home health care is expected to increase the fastest; employment in hospitals is expected to increase the least, largely because patients are being released earlier to reduce costs. Also, technological progress is making it possible to bring complex treatments to patients' homes.

Although working conditions vary with the place of employment, nearly all nursing jobs involve close contact with people. Good health and emotional stability are valuable assets. Nurses must be careful workers who take their responsibilities seriously. They must follow rigid guidelines to ensure the health and safety of themselves and their patients. Registered nurses generally work forty hours per week. They may have to work some night and weekend shifts, especially if they work in hospitals. Many nurses work part time.

Earnings and Benefits

Salaries for nurses vary with education, experience, and area of specialization. In 2004 the median annual salary of registered nurses was $52,330 per year. Benefits include paid holidays and vacations, health insurance, and retirement plans. Private-duty nurses generally charge a daily fee and must provide their own benefits.
Great Romantic Composer-Pianists of the 19th Century

What inspired Romantic composers of the 19th century to create the significant piano works that continue to speak profoundly to today’s audiences? Throughout the Romantic era the piano and the pianist-composers who wrote for it assumed an increasingly important role in European society. These pianist-composers and virtuosi fully explored the inner depths of their imaginations, and it is perhaps in the solo piano repertoire most of all that we as listeners become privy to their most passionate and idiosyncratic work.

In this course we focus on the piano works of Felix Mendelssohn, Frederic Chopin, Robert and Clara Schumann, Franz Liszt, and Johannes Brahms – pianist-composers who embodied the Romantic spirit and pursued freedom from the constraints of their predecessors. We will read composers’ letters, first-hand accounts, and current research, and of course, listen to performances. Professor Gibson will be using CDs and an electronic piano to illustrate her lectures.

TANNIS GIBSON is Professor of Piano and Associate Director for the Fred Fox School of Music at the University of Arizona. She has performed and taught around the world, including Weill Recital Hall (Carnegie), the Kennedy Center, Merkin Hall, and the National Gallery of Art, as well as throughout Asia, Europe, and South America.
1. Interactive Didactic: Specific resilience, leadership, and social competence skills are the focus each week, using discussion, role plays, puppets, or other age-appropriate techniques for emphasis. Topics we have covered include personal space awareness, anger/anxiety management, self-regulation, taking turns, starting and maintaining conversations, and the intention and impact of behavior.

2. Free Play/Behavioral Rehearsal: Children practice friendship skills with other kids through play in group, with the group leader available to coach them through specific struggles that arise and praise the use of skills taught.

3. Relaxation/Self-Regulation Techniques: Therapists help children increase awareness of the connection between body, emotions, and feelings through calm breathing, visualization (or imagination walks), mindfulness, progressive muscle relaxation, music, or yoga.

4. Generalization: Resilience Builder homework assignments and sportsmanship activities (e.g., using the Wii, or going bowling) are utilized to reinforce the positive gains seen in the group therapy setting and expand them to use in the world outside of group.

5. Parents as Active Partners: Therapists are continually informing, encouraging, and assisting parents. Each week parents are given information about the specific social skill the group is addressing, ideas to encourage and foster development, and recommendations for additional readings. In addition, each month parents are invited to join their child for a part of group.
One of the most basic lessons in photography which every aspiring photographer should remember is perspective. Perspective is different for everyone, even for people standing at the same place, at the same time. Now that most of us are staying home, this rule on perspective is very important. Are we going to see this time as an opportunity to learn or as a ‘staycation’ of sorts? Either way, we need images to document this period, not only as a record for posterity but as something to ponder on in the future.

This is the best time to practice and hone your photography skills. If you do not have access to a DSLR camera, you can use your mobile phone. If you want to improve your images, our friends from Storyteller have provided some very important tips to keep in mind.

What is photography?

Photography is the art, application and practice of creating images by recording light or other electromagnetic radiation, either by means of an image sensor or by means of a light-sensitive material such as film. Simply put, photography is the art of creating images using a camera.

Types of Camera

Common Photography Terms

Exposure is the amount of light allowed to fall on each unit area of a photographic medium (photographic film or image sensor) during the process of taking a photograph. Exposure is measured in lux seconds and can be computed from exposure value (EV) and scene luminance in a specified region.

Aperture: In optics, an aperture is a hole or an opening through which light travels (LV or light value).

Shutter speed, or exposure time, is the effective length of time a camera shutter is open, and is measured in time (typically fractions of a second or milliseconds).
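As a worked illustration of how these terms relate (this example is not from the article itself): the standard exposure-value formula is EV = log2(N²/t), where N is the f-number of the aperture and t is the shutter time in seconds. A minimal Python sketch:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 s is close to EV 13.
# Halving the shutter time adds exactly one stop (+1 EV).
print(round(exposure_value(8, 1 / 125), 2))   # ~12.97
print(round(exposure_value(8, 1 / 250), 2))   # ~13.97
```

Any combination of aperture and shutter speed with the same EV admits the same amount of light, which is why photographers talk about trading a wider aperture for a faster shutter.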
The three key camera settings are:

- ISO: light sensitivity
- Shutter: exposure and subject movement
- Aperture: DOF (depth of field)

Tips from Storyteller:

- Move close to the subject
- The subject should be off-centre
- Take lots of images
- Preset exposure and focus
- Use a tripod
- Use flash even during the daytime
- Follow a theme
- Always look for the light
- Don’t rely too much on the pop-up flash
- Predefine moods
- Experiment on reflections
- Make sure that you always have spare batteries
- Always clean your camera

Alan Desiderio of Storyteller has over 25 years of experience as a photographer, covering a wide range of work across publishing, advertising and event disciplines in the Philippines and the Middle East. Storyteller is a creative agency, production house and post-production house, rolled into one. For collaborations and projects, call 4418 6801 or visit their website.
While alcohol addiction is a devastating disorder that can ruin lives, certain people who have a problem with it manage to hold down substantial responsibilities and stressful careers. Externally, these so-called high-functioning alcoholics appear to have it all together. They may drive great cars, live in great areas, and make a significant income. However, just because they're high-functioning doesn't mean that they're immune to the consequences of alcohol. They're still in danger of harming themselves and those near them. For example, a pilot nursing a hangover, a doctor operating with unsteady hands, or a financier handling large amounts of money are each at risk of triggering awful tragedies if they stay on their dysfunctional course. Here are some indications that can help in recognizing these ticking time bombs:

1. They consume alcohol instead of eating. Alcoholics will commonly replace meals with a couple of cocktails, lose their appetite for food altogether, or use mealtime as a pretext to start drinking.

2. They can get out of bed without a hangover, even after a number of drinks. Drinking alcohol regularly over a long period of time can cause the body to become dependent on it. Routinely high-functioning alcoholics are able to drink a lot without the punishing hangover that plagues the occasional drinker.

3. Not drinking makes them irritable, anxious, or uncomfortable. If an alcoholic is forced to abstain, his or her body often reacts negatively, as it is dependent on the sedative effects of alcohol. Abruptly stopping can cause anxiety, nervousness, sweating, an elevated heart rate, and even seizures.

4. Their behavior changes substantially while under the influence of alcohol. When they drink, alcoholics may change considerably. For example, a usually pleasant person might become aggressive or make impulsive decisions.

5. They can't have only two drinks. An alcoholic has a problem stopping, and may even finish other people's drinks. Alcohol will never be left on the table, and there is always a pretext for "another round."

6. Periods of memory loss or "blacking out" are commonplace. Many people dependent on alcohol will participate in events that they cannot recall the following day. They may not appear significantly intoxicated at the time, yet they're unable to recall things that took place.

7. Attempts to discuss their drinking habits are met with hostility and denial. When confronted with concerns about their alcohol use, heavy drinkers will typically retreat to denial or anger, making discussion difficult.

8. They always have a good explanation for why they drink. If flat denial or hostility is not the preferred means of evasion, many alcoholics will have an outwardly reasonable explanation for their behavior. Stress at work, problems at home, or a bounty of social activities are typical excuses for their harmful actions.

9. They hide their alcohol. Many alcoholics will drink alone, or sneak drinks from a bottle kept in a desk or in their car. This kind of concealed drinking is a significant red flag, and there is no explanation for this behavior other than alcoholism.

Let's keep our society productive, safe, and sober by watching for problematic behavior in an effort to get these distressed colleagues, family members, and close friends the help they require.
GLASS REINFORCED PLASTIC (GRP)
WHAT IS GRP?
Glass reinforced plastic (GRP) is a composite material made from plastic reinforced by fine fibres, usually of glass, and is used widely throughout UK industries. Pultruded GRP comes in a range of sections similar to steel but can also be manufactured to a specific detail and design. At only 25% of the weight of steel but with comparable strength, GRP is easier and quicker to install and reduces manual handling. Due to its high resistance it is widely used in chemical plants and other corrosive environments. Being virtually maintenance free, GRP is installed in remote or hard-to-access areas, greatly reducing the man hours and materials spent on the repairs and maintenance normally required for steel products. Walkways, stairs, handrails, grating plates, duct covers, well covers, stair treads, support structures and bridges are only some of the areas where GRP is being used, overcoming hostile environments where corrosion, access, maintenance, repairs and safety all contribute to additional or ongoing annual costs. Fire retardant, non-conductive, self-coloured, with a high strength-to-weight ratio, lightweight compared to steel and virtually maintenance free: these are just some of the benefits in comparison with steel and other structural products. The video below shows a section of steel and then GRP, similar to the molded image above, being subjected to a 175 kg weight dropped on it from a height of approximately two metres.
GRP DROP TEST
EXCELLENT CHEMICAL & CORROSION RESISTANCE: Withstands the most corrosive conditions to ensure solid structural integrity in tough environments.
UNMATCHED IMPACT RESISTANCE: Can withstand major impacts with little structural damage and no failure.
HIGH STRENGTH-TO-WEIGHT RATIO: Less than one-quarter the weight of steel grating.
ELECTRICALLY AND THERMALLY NONCONDUCTIVE: Eliminates electrical/thermal hazards.
EASY TO FABRICATE: Does not need heavy lifting equipment or expensive tools; can be easily carried by installation personnel; can be cut using standard circular or sabre saws fitted with abrasive blades, reducing installation costs.
LONG MAINTENANCE-FREE LIFE: Requires no scraping, sandblasting or painting, cutting life-cycle costs dramatically.
ELECTRONICALLY TRANSPARENT: Will not affect electromagnetic/radio frequencies.
FIRE RETARDANT: Class I (0-25) flame spread rating.
SLIP RESISTANT: Improves safety, reduces slips and falls.
NO SCRAP VALUE: The increase in the cost of metal has stimulated the theft of lead flashing, manhole covers and the like. Composites have no scrap value and are therefore not targeted by thieves.
GRP MARKET SECTORS
Faux coving, columns, facades, canopies and even chimneys have all been fabricated from GRP to create a cost-effective and easily fitted solution for improving and mimicking traditional architectural styles, avoiding the cost-prohibitive avenue of natural stone.
Due to advances in GRP technology, Briggs can seal and repair water courses and culverts by lining them in resin-impregnated glass fibre matting. This system can be installed in running rivers with no contamination of the water course. The traditional requirements for over-pumping and fish passages are not needed, saving significant time, money and disruption to the water course. The product is lightweight and quick to install.
OIL & GAS
Avoiding fires and explosions in these sectors is paramount, and the non-conductive and spark-free qualities of GRP have made it a popular choice of material, particularly in the offshore market.
Because GRP is only one-quarter of the weight of steel, bridge operators in particular have been installing GRP access and inspection structures to minimise the imposed load on bridges while providing safe access for maintenance and inspections.
CHEMICAL & INDUSTRIAL
The chemical-resistant, non-conductive and spark-free qualities of GRP have made it a popular choice of material to replace steel structures in these health-and-safety-sensitive sectors.
POWER & RENEWABLE ENERGY
The lightness of the material, affording ease of installation, and its longevity appeal to wind farm operators, particularly in the offshore market, where reducing all forward maintenance to a minimum is a key driver. Platforms, duct covers and vertical ladders, all made from GRP and requiring no maintenance, are used on wind turbines. The cost involved in accessing offshore facilities, and the limits imposed by weather windows, mean that future-proofing these structures is of paramount importance.
GRP has been employed in the marine industry for decades, from canoes to warships. Its resistance to salt water and sea air makes it the preferred choice of material, providing a long-term solution to an aggressive environment.
WATER & WASTE WATER
Due to the corrosive nature of the chemicals used in water treatment facilities, GRP is replacing steel walkways, staircases, platforms and ladder access. Contamination of water reservoirs from rusting steel structures is leading to their replacement by GRP bridges.
If you’re creating a Word document containing sensitive information that should only be viewed by certain people, you can add a password to the document so it can’t be opened by anyone who doesn’t know the password. We’ll show you two ways to do this.
The first method involves the backstage screen. Open the document to which you want to add an open password and click the “File” tab. On the “Info” backstage screen, click the “Protect Document” button and select “Encrypt with Password” from the drop-down menu. The “Encrypt Document” dialog box displays. Enter a password in the “Password” edit box and click “OK”. On the “Confirm Password” dialog box that displays, enter the same password again in the “Reenter password” edit box and click “OK”. The “Protect Document” section on the “Info” screen is highlighted in yellow, and a message displays telling you that a password is required to open this document.
The second method of applying an open password to a Word document involves the “Save As” dialog box. Again, make sure the document to which you want to add an open password is open, and click the “File” tab. On the backstage screen, click “Save As” in the list of items on the left. Select a folder where you want to save the password-protected document: either select the “Current Folder”, a folder under “Recent Folders”, or click “Browse” to select a folder not in the list. Navigate to the desired folder, if necessary. Then click “Tools” next to the “Save” button and select “General Options” from the drop-down menu. On the “General Options” dialog box, enter a password into the “Password to open” edit box and click “OK”. On the “Confirm Password” dialog box that displays, enter the password again in the “Reenter password to open” edit box and click “OK”. Click “Save” to save the document with the password. The next time you open the document, Word will ask you for the password before opening it.
When you enter an open password using either method, the password is entered in the other location as well. So, if you want to remove the password from your Word document, open the document, access either the “Encrypt Document” dialog box or the “General Options” dialog box as described above and delete the password. Then, save the document again.
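One consequence of the open password is visible at the file-format level: an ordinary .docx is a plain ZIP archive, while an encrypted (password-protected) one is wrapped in an OLE2 compound-file container instead. A minimal sketch that uses this to check whether a document is password-protected without opening Word (the function name is my own):

```python
# Magic bytes of the two container formats involved.
ZIP_MAGIC = b"PK\x03\x04"                               # ordinary .docx (ZIP)
OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"        # encrypted Office file (OLE2/CFB)

def is_password_protected(path: str) -> bool:
    """Guess whether a Word document has an open password by its container format."""
    with open(path, "rb") as f:
        header = f.read(8)
    if header.startswith(ZIP_MAGIC):
        return False          # plain ZIP container: no open password
    if header.startswith(OLE2_MAGIC):
        return True           # OLE2 container: document is encrypted
    raise ValueError("not a recognised Office file")
```

This is a heuristic sketch, not a full parser: it only inspects the first eight bytes, which is enough to tell the two containers apart.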
A Sussex University study has shown how unused stockpiles of nuclear waste can be transformed into something useful. According to the scientists, led by Professors Geoff Cloke and Richard Layfield and Dr Nikolaos Tsoureas, the waste product of nuclear power can be used to create valuable commodity chemicals as well as new energy sources. Depleted uranium (DU) is a radioactive by-product of the process used to create nuclear energy. With many fearing the health risks of DU, it is either stored in expensive facilities or used to manufacture controversial armour-piercing missiles. Sussex scientists have now managed to convert ethylene (an alkene used to make plastic) into ethane (an alkane used to produce a number of other compounds, including ethanol) by using a catalyst which contains depleted uranium. This breakthrough could help reduce the heavy burden of large-scale storage of DU, and lead to the transformation of more complicated alkenes. “The ability to convert alkenes into alkanes is an important chemical reaction that means we may be able to take simple molecules and upgrade them into valuable commodity chemicals, like hydrogenated oils and petrochemicals which can be used as an energy source,” notes Layfield. “The fact that we can use depleted uranium to do this provides proof that we don’t need to be afraid of it, as it might actually be very useful for us.” Working in collaboration with researchers at Université de Toulouse and Humboldt-Universität zu Berlin, the Sussex team discovered that an organometallic molecule based on depleted uranium could catalyse the addition of a molecule of hydrogen to the carbon-carbon double bond in ethylene, the simplest member of the alkene family, to create ethane. “Nobody has thought to use DU in this way before,” contends Cloke.
“While converting ethylene into ethane is nothing new, the use of uranium is a key milestone. The key to the reactivity was two fused pentagonal rings of carbon, known as pentalene, which help the uranium to inject electrons into ethylene and activate it towards the addition of hydrogen.”
Image and content: Getty Images via Forbes/University of Sussex
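The chemistry described above is the textbook hydrogenation of an alkene: C2H4 + H2 → C2H6. As a purely illustrative sanity check (not part of the study), the atom counts on each side of the reaction can be compared:

```python
from collections import Counter

# Atom counts for each species in C2H4 + H2 -> C2H6
ethylene = Counter({"C": 2, "H": 4})   # the simplest alkene
hydrogen = Counter({"H": 2})
ethane = Counter({"C": 2, "H": 6})     # the product alkane

# Counter addition sums the atom counts of the reactants;
# a balanced reaction conserves every element.
reactants = ethylene + hydrogen
assert reactants == ethane
```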
Did you know...
Rib fractures
The ribs form the framework of the human trunk: paired elements of the axial skeleton that connect to the spine. Together they form the thorax, which houses most of the vital organs. A rib fracture, an injury to a costal bone that disturbs its integrity, is one of the most widespread injuries of the chest. Rib fractures can cause internal injuries to the respiratory system, heart, pleura, and intercostal vessels. In severe cases a fracture can be fatal, so you should not self-medicate.
Chest injuries may be open, arising from blows to the chest, gunshot wounds, and so on. Closed injuries arise from blunt impacts, which produce bruises, haematomas, rib fractures, and crush injuries of the chest. The rib fracture is the most common closed chest injury (67% of closed injuries). Ribs IV-VIII along the posterior and mid-axillary lines are the most frequently fractured.
The force applied to the ribs can be direct or indirect. Under a direct force the rib bends inward, toward the pleural cavity, and its fragments can damage internal organs, a lung, or the parietal pleura. Under an indirect force the chest is compressed, so ribs break on both sides of the point of compression; several ribs usually break at once. Bilateral fractures cause the chest wall to lose stability and can lead to pleuropulmonary shock and impaired ventilation of the lungs. There are also flail (fenestrated) fractures, in which a rib breaks in two places on the same side.
Rib fractures are most often seen in people over 40, owing to age-related changes in bone tissue. In children rib fractures are very rare because of the elasticity of the chest, while in the elderly even minor injuries can cause multiple fractures.
Rib fractures are classified as a crack, a subperiosteal fracture (in which the bone breaks but the periosteum remains intact), and a complete fracture, which most often occurs at the bend of the rib, i.e. on the lateral surfaces of the chest; in all cases the symptoms are similar. The cause of a rib fracture is direct or indirect chest trauma: for example, a fall onto a protruding object, a direct blow, compression of the chest, being struck by a car, or a traffic accident.
Symptoms of a rib fracture
The main symptoms of a rib fracture are:
- Severe pain on breathing in the area of the injury. The pain is acute on inhalation and intensifies with coughing and deep breaths.
- Swelling and tenderness to palpation in the area of the fracture.
- Rapid, shallow breathing; the injured side of the chest typically lags behind during breathing. Congestion can develop, leading to post-traumatic pneumonia.
- Bruising at the site of the fracture.
- A crackling sensation (crepitus) felt on palpation.
- Bleeding into the airways or chest cavity, which can be fatal.
- With injury to a lung: haemoptysis, subcutaneous emphysema, and accumulation of blood and air in the chest.
It is important to note that with fractures of the posterior parts of the ribs, respiratory disturbances are mild.
Treatment of a rib fracture
To diagnose a rib fracture, the doctor performs percussion and auscultation and orders an X-ray. These investigations rule out fluid in the pleural space, and the film shows whether there is a fracture.
Treatment proceeds in several stages. First the injured part of the chest is anaesthetised with a novocaine block (10 ml of procaine solution) or an alcohol-novocaine block (9 ml of procaine solution and 1 ml of alcohol). For multiple fractures a paravertebral block with procaine solution is used, and for multiple bilateral fractures, skeletal traction of the chest wall. In difficult cases surgical fixation of the fracture is performed. If necessary, a puncture is carried out to drain accumulated blood. The patient requires complete rest, and rehabilitation in uncomplicated cases usually takes about four weeks. Expectorants, physiotherapy, and breathing exercises are prescribed to prevent congestion in the chest organs. For uncomplicated fractures a plaster bandage is applied, while multiple fractures are treated in hospital. If the patient is in severe pain, and to prevent post-traumatic pneumonia, novocaine and other blocks are given at the fracture site.
Do not treat a rib fracture on your own, but first aid can be given: apply ice to the painful area, take ibuprofen, apply a compression bandage of gauze and a towel with the chest in the "half-exhalation" position, and get to a doctor as soon as possible. The patient should be transported to hospital in a semi-sitting or lying position.
Chest injuries can lead to complications: pneumonia, haemothorax and pneumothorax, and contusion of the lung and heart. Fractures of the lower ribs can damage abdominal organs such as the liver and spleen.
Section: Orthopedics and traumatology
The study of space weather is one of the newest fields in physics and astronomy, yet potentially one of the most dangerous. Our understanding of the mechanisms that drive the Sun, and thus space weather systems, is limited, which means our ability to forecast and defend against a possible disaster is hindered. This report will discuss the history of space weather events on Earth and the damage they can inflict, as well as examining the technologies currently in place to monitor and analyse the Sun to forecast emanations. It will also explore future and proposed projects for defending against space weather, and assess what impact they may have in protecting the Earth from a potential solar disaster. The report concludes that our current understanding and systems to protect against space weather would not be sufficient to mitigate the effects of the most ferocious solar storms. However, our forecasting ability and our processes for preparing for a solar flare or CME are much more comprehensive than one may first imagine, and for the most part there is a structure in place to defend against most solar threats. Furthermore, the report highlights the commitment from governments across the globe to improving our ability to prevent a space weather disaster, and suggests that on-going research and future proposed projects will lead to Earth being well guarded against a space weather disaster. To most, terms such as space weather, geomagnetic storms and solar flares are fictions, merely the domain of disaster blockbusters. I believe this is a gross misinterpretation of what is considered one of the highest-priority natural hazards by the UK Government. A space weather disaster is, I believe, one of the most under-appreciated threats to life on Earth as we know it.
Through this project, I wish to discover exactly what damage a space weather event can cause, what precautions we have in place to mitigate its effects, and, overall, whether the UK and indeed the world are prepared for a space weather disaster. Before we can discuss the impacts of space weather, we must first ascertain a grounding in what space weather is: fundamentally, the interaction between solar emanations and the Earth’s magnetic field. The Sun is a hot, massive ball of mainly hydrogen gas, so hot in fact that most of the atoms break apart into charged particles, a state known as plasma. These moving charged particles generate a magnetic field, which in turn generates electric fields, creating a dynamo. As a result, the Sun has a constant magnetic field which “leaks out” across the Solar System, and a constant, steady stream of charged particles emanates from the Sun in what is called the solar wind. The solar wind would have stripped the Earth of its atmosphere long ago if not for its natural defence: the magnetic field acts as a barrier against the oncoming solar wind, funnelling charged particles along the field lines and in toward the poles. These charged particles interact with the upper atmosphere and excite gas molecules, which in turn release energy in the form of photons. With billions of these collisions occurring at a given time, enough photons are released for the effect to be visible in the night sky. The different gas molecules emit photons of different wavelengths, which we see as ribbons of green, blue and red. This is the aurora, the most obvious space weather phenomenon, but only the surface of a much more complex system. Within the Sun, the high-energy charged particles move randomly and each generates its own magnetic field; these combine to produce the Sun’s overall magnetic field.
The random movement of the particles, however, forms “knots” (properly known as flux ropes) in the magnetic field where charged particles start to build up. Eventually these magnetic structures become too stressed and realign into a more natural configuration, releasing large amounts of energy along with the trapped particles. The energy is released in a solar flare, a wave of electromagnetic radiation that races across the Solar System at the speed of light. This wave “sweeps up” the protons emanated in the solar wind and causes a solar proton storm. These protons, travelling at large fractions of the speed of light, can cover the 1.5×10^8 km from the Sun to the Earth in less than ten minutes, and provide an early warning of an even larger emanation headed toward Earth. The trapped charged particles are then ejected from the surface of the Sun in a Coronal Mass Ejection (CME), an immense cloud of hot plasma hurled in a single direction at over a million miles per hour. A CME can take anywhere from one to three days to reach Earth, but when it does its effects can be far more devastating. On impact the CME compresses the Earth’s magnetic field, causing a shockwave and transferring large amounts of energy into the magnetosphere. The magnetic field of the CME and the magnetic field of the Earth can overlap and stretch the field away from the Sun, with all the energy and charged particles now trapped within the Earth’s magnetic field. Eventually too much stress is put on the magnetic field and it “snaps” back, explosively releasing the energy toward Earth and resulting in a geomagnetic storm.
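The proton travel time quoted above is a quick piece of arithmetic. As a rough illustration (the speed fractions below are my own example values, not figures from the report):

```python
SUN_EARTH_KM = 1.5e8   # Sun-Earth distance, in km (as quoted above)
C_KM_S = 3.0e5         # approximate speed of light, in km/s

def travel_time_minutes(fraction_of_c: float) -> float:
    """Time for a particle moving at the given fraction of c to cross
    the Sun-Earth distance, in minutes."""
    return SUN_EARTH_KM / (fraction_of_c * C_KM_S) / 60.0

# A proton at ~0.9c makes the trip in roughly nine minutes, consistent
# with the "less than ten minutes" figure; at 0.5c it takes about 17.
print(round(travel_time_minutes(0.9), 1))  # 9.3
print(round(travel_time_minutes(0.5), 1))  # 16.7
```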
In this era only the telegraph system was affected, inducing electric shocks in anyone who tried to operate it due to the immense currents flowing through the wires; but if such an event were to happen today, how would our technology-dependent civilisation cope? I think it is vital to assess what impact a space weather disaster could have on the modern world, and what systems are in place to mitigate its effects. Although I knew I wanted to focus my EPQ on the study of space weather, it took some time to formulate my exact question. Through reading articles and journals on the subject, however, it soon became apparent that there was a lot of mixed opinion on how prepared we are, as a country and as a planet, for a space weather event. Could one of these cosmic events spell out our premature doom? I became intrigued with learning more about this and so pursued research into how prepared we are for a space weather disaster. I began by watching a short film on the topic produced by the online scientific broadcaster Kurzgesagt, which gave a general overview of the topic of space weather and the threats it poses. Although this is a secondary source made for a lay audience, the researchers for the project are very detailed and thorough and cited all of the sources used in the video. This provided an opportunity to look in greater detail at some of the statistics and points raised in the film. One number that particularly shocked me was that the probability of a solar flare or coronal mass ejection striking the Earth within the next fifty years was fifty percent. I was keen to establish where this number came from and its legitimacy before I cited it in my project, and this led me to the video’s associated references. I found that the figure came from an article published by a NASA scientific outreach branch in 2014 [9].
The article itself is a second-hand source, using material originally published in the February 2012 issue of the scientific journal Space Weather: On the Probability of Occurrence of Extreme Space Weather Events. The Space Weather journal is a publication of the American Geophysical Union, an international non-profit research group where all publications are peer-reviewed. The article is now eight years old, and indeed in July 2012 a coronal mass ejection was a “near miss” for the Earth; the data from this event surely would have had an effect on the author Riley’s calculations. Furthermore, records of space weather events have only been kept since the Carrington Event of 1859, and photographic monitoring of the Sun only since the following year. This limited data set means Riley has had to extrapolate to produce his final value, which inherently introduces error and uncertainty. That said, Riley combined multiple data sets to generate his value, taking observational data of solar flares and the velocities of coronal mass ejections at different points in time and combining them with data about the effects felt on Earth, namely geomagnetic storms and changes in nitrate levels in ice cores. Using multiple sources of data recorded independently by different organisations increases the credibility of his value; however, Riley acknowledges that not all the sources are perfect: there is still scepticism among ice-core chemists that fluctuations in nitrate levels are the result of space weather events [3]. Overall, the probability is to be taken with some doubt, but it equally provides a good indicator that the chances of a space weather event are not to be taken lightly. Next I needed to find out more about the potential damage a space weather disaster might cause. Mainstream and popular media sites suggest the damage would be tremendous; the New Zealand publisher Stuff suggested the United States alone could face damages of anywhere from $500 billion to $2.7 trillion.
However, this article provided no references for these estimates. The prestigious insurance agency Lloyd’s provided similar figures in its publication Solar Storm Risk to the North American Electric Grid, a report made in collaboration with Atmospheric and Environmental Research, an agency that consults with organisations such as NOAA and NASA as well as private firms to anticipate risk from climate and weather. These figures can therefore be taken to be reliable. The Lloyd’s publication also suggests that twenty to forty million people in the US alone could be affected by power outages for a period of anywhere from sixteen days to two years [6]. The variation in these estimates is of course a result of not having any data or experience in recent history to base predictions on, and so they are highly speculative. I wondered, however, whether this quite pessimistic view of the potential effects of a space weather event on our electrical infrastructure was shared across the entire scientific community, and consulted a space weather forecaster at the Met Office for his opinion. He suggested that these figures are often skewed to highlight just how terrible a truly “worst-case scenario” (a more intense solar storm than the Carrington Event) would be, while neglecting to mention how small the probabilities of such an event occurring are. Furthermore, different nations have different procedures in place to mitigate such effects, and will also experience different consequences based on their geography and infrastructure. Most articles and research into the effects of space weather tend to focus on the impacts on the United States, a reasonably northerly country in closer proximity to a magnetic pole and so subject to more intense effects of a geomagnetic storm. A country on the equator would not be so badly affected by a space weather event.
I decided to research the UK’s planned response to a space weather event and consulted the Space Weather Preparedness Strategy, published by the UK Government Department for Business, Innovation and Skills. In this paper they highlight that a recurrence of the Carrington Event has a 1% annual probability [8], and explain how they have used less severe but more recent data points to generate this estimate. The report also emphasises the difficulty in generating useful numerical values for the probabilities and impacts of space weather events, and so as a source it is most useful in showing its own shortcomings. Finally, I had to look at sources discussing historical space weather events. I discovered spaceweatherarchive.com, a website featuring numerous articles discussing historical and more recent space weather events. The blog posts are detailed but are all based on secondary research. Furthermore, the site is operated, and all articles published, by a single author, so there is no peer review of pieces before publication. The author does, however, provide hyperlinks to all of his sources for further reading. Spaceweatherarchive.com informed me of many smaller space weather events in more recent history, which I then researched individually. A 2017 article in The Economist, How To Predict and Prepare For Space Weather, discussed how in 2003 a minor solar storm caused 4,096 votes to be added to a candidate’s total in a local Belgian election. The Economist is a respected and reputable magazine, and in the article it quoted Bharat Bhuva, a professor of electrical engineering, speaking at an annual meeting of the American Association for the Advancement of Science. The article is therefore rooted in scientific research. I also discovered solarstorms.org, which contains a complete timeline of “Space Weather History”.
Though this provides a comprehensive and detailed list of space weather events, no sources or citations are given to indicate where the author learnt of them; thus the source is not entirely reliable. However, it provided a good starting point from which to conduct my own research into some of the lesser-known historical space weather events. I found drawings taken from multiple journals of eyewitnesses in Japan in 1770, who saw bright red lights in the night sky. Although at the time the cause of these lights was still a mystery, modern scientists have concluded that the lights seen across East Asia were very likely the result of a solar storm. More detailed than painted depictions of the aurorae, Captain Cook observed these lights while in Indonesia and recorded the exact angular height of the aurora as “reaching in height about twenty degrees above the horizon”. Retrospectively, modern scientists have been able to use this data to estimate the strength and size of the 1770 solar storm; though Cook’s figure is only an estimate, it provides more of an insight into solar storms of the past. I also needed to gather information on the richest space weather data source to date, the 1859 Carrington Event. I searched for primary accounts of the event and found a report in the Baltimore American and Commercial Advertiser of a “magnificent display of the auroral lights”. The city of Baltimore is well outside the auroral zone, generally considered to lie between sixty and seventy degrees of latitude north, so records of such bright and vivid auroral displays give a suggestion as to the intensity of the solar storm. Being a primary source, it also leaves no chance of misinterpretation of events by historians, and so is a reliable indicator of how profound and noticeable an effect the CME had on the residents of the city.
I also found a paper, Duration and Extent of the Great Auroral Storm of 1859 [7], which took eyewitness accounts and records of the Carrington Event and fed them into an algorithm to generate a database of auroral sightings across the Earth at different points in time. This data was then used to model auroral visibility across the Earth during early September 1859, suggesting that aurorae would have been visible in regions as far south as twenty degrees north, around Panama and Colombia. However, this model is produced by extrapolating data recorded mainly in Northern Europe and America, so we cannot be sure the effects of the CME reached that far. Furthermore, the authors acknowledge in their paper that 1859 was well before the unification of time across countries and the formation of definite time zones, so there is likely to be much disparity in the recorded times of events. Overall, any information or publication about space weather events and their potential impacts is severely limited by our lack of data on the subject; though we have sophisticated techniques for modelling possible effects, these are no substitute for raw observation. That said, there is a clear trend across sources: large-scale space weather events are possible, and they are likely to happen again. For almost two hundred and fifty years, humanity has grappled with the threat of space weather to technology and society. The Carrington Event of 1859 was the first solar storm to have a damaging effect on technology, overloading telegraph systems and rendering long-distance communication impossible for a period. There were also reports from telegraph operators of the systems sparking and in some cases causing electrocution. In the mid-1800s the telegraph system, a so-called “Victorian Internet”, was the only large-scale electrical infrastructure on Earth, so though the effects were profound, ultimately they were not life-changing.
Indeed, in theory solar storms on a similar scale to the Carrington Event or larger have occurred multiple times throughout humanity’s history; however, our technological abilities had only developed far enough for the damage to be noticeable by the mid-19th Century. The Sun is known to have an eleven-year cycle of activity levels, and though the link between this and large-scale solar storms isn’t yet properly understood, it can be inferred that solar storms are a regular occurrence, raising the question: when can we next expect one, and what will be the consequences? Our reliance on technology since the Carrington Event has increased exponentially, and a global overload of electricity grids caused by geomagnetically induced currents would be devastating. Smaller space weather events in more recent history can give us an indication of how powerful a large-scale solar storm would be, but before we explore these it is important to have a grounding in how the effects of space weather are measured. Solar storms and space weather can be measured using the Disturbance Storm Time Index, or Dst. This is a value based on the stability of the Earth’s magnetic field, recorded by magnetometers in multiple nations on an hourly basis since 1957. The scale is measured in nano-Teslas (nT), the unit of magnetic flux density. Storm values are generally negative, and the Earth regularly experiences activity in the order of -50nT that causes aurorae. Any storm that results in a Dst value of -250nT or below is considered a “super storm”. To date, there have only been thirty-nine “super storms” by this scale, with the Carrington Event of 1859 the most intense at a minimum estimated Dst of -1710nT. Since 1859, Earth has encountered multiple smaller solar storms, but with effects far more pronounced due to our increasing reliance on technology.
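The Dst categories described above can be captured in a short sketch. The cut-offs here are taken from the figures quoted in this essay (roughly -50nT for ordinary aurora-producing activity, -250nT and below for a “super storm”); they are illustrative, not an official classification scheme:

```python
def classify_dst(dst_nt: float) -> str:
    """Rough storm category from an hourly Dst reading in nano-Teslas.

    Thresholds follow the figures in the text: around -50 nT is ordinary
    aurora-producing activity, while -250 nT or below marks a "super storm".
    These boundaries are illustrative, not an official scale.
    """
    if dst_nt <= -250:
        return "super storm"
    elif dst_nt <= -50:
        return "storm"
    return "quiet"

# The Carrington Event's estimated minimum of -1710 nT sits far past the
# super-storm threshold; the 1989 Quebec storm (-589 nT) also qualifies.
print(classify_dst(-1710))  # super storm
print(classify_dst(-589))   # super storm
print(classify_dst(-30))    # quiet
```

By this measure only thirty-nine recorded storms have ever crossed the super-storm boundary, which is what makes the Carrington figure so striking.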
In May 1921, the United States was badly affected by a solar storm that generated currents great enough to spark fires, including in Grand Central Terminal. Modern estimates put the Dst index value of this storm at around -907nT, slightly less than that of the Carrington Event. However, sixty years of technological progress had made the impacts of the storm significantly more damaging: the telephone systems in New York City, still in their relative infancy, were damaged, as were the telegraph systems on which the railways depended. This disruption led to some sources referring to the event as the “New York Railroad Storm”. Approximately seventy years later, the Earth was struck by another solar storm in March 1989, with a minimum Dst index value of -589nT. Though it had just over a third of the intensity of the Carrington Event, the 1989 solar storm had far from trivial effects. Aurorae were visible as far south as Texas and Cuba, and there was some fear that the red glows in the sky were the result of a nuclear war. The most significant effects, however, were experienced by the Canadian province of Quebec, where on the 13th of March the Hydro-Québec power utility grid was overloaded. The geomagnetically induced currents led to a sustained power failure of nine hours that affected six million people and led to businesses and transport links closing, including Dorval Airport. If the effects had been felt a few degrees further south, in some of the U.S.’ East Coast cities, estimates suggest the storm could have caused $6 billion worth of damage. The storm also had a noticeable effect on humanity’s infrastructure above the Earth’s surface: the GOES-7 weather satellite experienced five years’ worth of solar panel degradation in just seven days and lost half of its mission life. It also caused delays to the Space Shuttle Discovery mission STS-29, when a sensor in one of the tanks supplying hydrogen to the fuel cells showed unusually high pressure readings.
The 1989 “Quebec Blackout”, as it is known, is perhaps the most severe solar storm in recent history, but not the only one. In 2003, a minor solar storm of Dst intensity -383nT added 4,096 false votes to a local election in Belgium. Most ominously, in July 2012 the Earth had a “near miss” from a super solar storm comparable only to the Carrington Event, which would have registered -1200nT on the Dst index had it struck the Earth. Fortunately, the storm was caused by a coronal mass ejection that passed through the orbit of the Earth approximately nine days after the Earth had been there, a close shave when one considers the Sun has an eleven-year cycle of activity which is unpredictable at best. The STEREO-A satellite observed the CME and recorded it as travelling at between 1800 and 2200 miles per second, meaning it could travel the distance from the Sun to the Earth in as little as 12 hours. Our realistic window of time between detecting such a CME and it colliding with Earth would be slightly shorter still, raising the question: are we prepared enough to react to such an event in so small a period of time? These past events paint a picture of humanity’s lack of preparation to mitigate the effects of a solar storm; however, I think this is an unfair assessment. Figures such as the United States’ potential damages reaching $2.7 trillion suggest that the effects of a large-scale solar storm would be dire, but work in the field of solar science is constantly evolving and there are now more systems in place than ever to analyse, forecast and mitigate the effects of space weather. We have only been monitoring solar activity and space weather intently since Richard Carrington’s initial observations in 1859, and so have only experienced 24 full eleven-year solar cycles as of 2020. In this time, however, our methods and technology for analysis and forecasting have improved dramatically.
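The 12-hour travel-time figure is easy to check from the recorded speeds. A quick sketch (the 93-million-mile Sun-Earth distance is a standard average figure, not taken from the source, and real CMEs decelerate in flight):

```python
SUN_EARTH_MILES = 93_000_000  # average Sun-Earth distance (assumed figure)

def travel_hours(speed_mps: float) -> float:
    """Hours for a CME to cover the Sun-Earth distance at a constant
    speed given in miles per second (a simplification: real CMEs slow
    down as they travel)."""
    return SUN_EARTH_MILES / speed_mps / 3600

# STEREO-A recorded the 2012 CME at 1800-2200 miles per second.
print(round(travel_hours(2200), 1))  # 11.7 -> "as little as 12 hours"
print(round(travel_hours(1800), 1))  # 14.4
```

Even at the slower end of the recorded range, the warning window is well under a day, which underlines how little reaction time such an event would allow.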
Carrington observed the solar flare by projecting an image from his optical telescope, trained on the Sun, onto a white screen. Eighteen hours after the flare, the first signs of a geomagnetic storm were compass needles spinning erratically and the telegraph system failing. Since Carrington and his contemporaries made the link between solar activity and geomagnetic disturbances, systems have been put in place to monitor both aspects. Now, nations operate a slew of magnetometer observatories across the world that all feed data back to generate real-time values for the Disturbance Storm Time Index. The British Geological Survey alone is responsible for nine observatories across the world that can generate near real-time data on geomagnetic activity. A magnetometer’s configuration is similar to that of a transformer: it contains two cores made of a ferromagnetic material, each with a coil of wire wrapped around it. Through one coil is passed an alternating current, which induces an alternating magnetic field. This field should induce an identical alternating current in the second coil, but it never will, due to the interaction between the induced magnetic field and the Earth’s magnetic field. The difference in phase and intensity between the two currents can then be measured and interpreted to understand the strength and characteristics of the geomagnetic field at that point. Magnetometers, then, rely on much the same physical principles as compass needles, but humans have developed the technology to exploit this effect to generate tangible, quantitative data. One area of great activity in space weather forecasting is Antarctica. The Seventh Continent is “a natural reserve, devoted to peace and science” and “scientific observations and results from Antarctica shall be exchanged and made freely available”.
To this end, many nations have set up space weather monitoring facilities at their research stations in a joint effort to increase our understanding of the subject. The Antarctic lends itself to the study of solar storms due to its proximity to the South Pole, as well as the lack of electronic signals to cause interference. The British Antarctic Survey operates its Space Weather Observatory programme from the Halley VI Research Station, situated on the Brunt Ice Shelf. Halley is the operation centre for a dozen low-powered magnetometers located across the continent, specially designed to run from solar panels and detect currents in the ionosphere, where satellites orbit. Another vital piece of equipment stationed at Halley is the microwave radiometer. When the charged particles ejected from the Sun “fall” into the magnetic pole above Antarctica, they excite the gas molecules in the mid to upper atmosphere. The microwave radiometer focuses on the area 35-90km above the surface, where the charged particles can produce potentially harmful radicals that don’t occur naturally. One such radical is nitric oxide (NO), which can be very harmful if it reaches the ozone layer of the atmosphere, where it contributes to its breakdown. The NO molecules absorb certain frequencies of microwaves incident from space, and the radiometer detects this as “absorption lines” on the spectrum. This unique absorption spectrum indicates to scientists the levels of NO in the atmosphere, and with it tells us more about space weather activity. The radiometer itself relies on an incredibly sensitive detector that has to be supercooled to -269ºC. An ingenious method of monitoring space weather is using Very Low Frequency (VLF) radio, anywhere from 200 to 10000 Hz, though most activity is in the range of 400 to 5000 Hz. Nearly anything with an alternating current passing through it will produce a signal at these low frequencies, and so the currents induced in our upper atmosphere do too.
Halley is home to the Halley VLF receiver, a dedicated series of antennae linked to a global network, tuned to listen to the currents in the upper atmosphere. These arrays record and map lightning strikes around the world and study changes in the upper atmosphere caused by interactions with the solar wind leading to induced currents. The Halley VLF receiver is just one part of a globe-spanning network, with stations from Hawaii to the Lake District. These projects all display a widespread commitment to increasing our understanding of space weather and a realisation of how important the subject is to our future. However, Earth-based technologies are limited to being reactive systems to a solar storm: we can study the effects of the storm after the fact and use this knowledge to help forecast, but we can’t know for sure when a storm will hit. For that, multiple space agencies from across the world have invested in space-based technologies to allow us to detect coronal mass ejections and solar flares before they reach Earth. The Solar Terrestrial Relations Observatory (STEREO) mission launched in October 2006, comprising two near-identical solar observatories placed into heliocentric orbit. The first satellite, STEREO-A, was inserted into an orbit inside the Earth’s, meaning it moves faster, while STEREO-B’s orbit is outside the Earth’s, so it moves slower. This allows the satellites to view two different areas of the Sun at the same point in time, enabling the first stereoscopic, 3D images of the Sun to be generated. This has allowed scientists to gain a much more detailed understanding of activity on the surface of the Sun and to look for patterns in solar behaviour in relation to solar storms. Arguably the most vital contribution of the STEREO mission, however, came in 2012, when the STEREO-A satellite was directly struck by the largest solar storm since the Carrington Event.
As aforementioned, the data generated from this fortuitous impact (fortuitous if ominous: STEREO-A orbits in the plane of the Earth, so if the coronal mass ejection had occurred as little as nine days earlier, it would’ve struck our planet) has proved invaluable in understanding the effects of coronal mass ejections. The STEREO missions, however, are just the beginning of humanity’s commitment to understanding our Sun and space weather. As evidence of how this is a global issue to be tackled, NASA and ESA have collaborated on the Parker Solar Probe/Solar Orbiter missions. The Parker Solar Probe has been specially designed to go closer to the Sun than any spacecraft before it, and to collect data that is simply impossible to obtain from anywhere else. The PSP has already given us insights into the behaviour of the Sun’s magnetic field, revealing it to be far more volatile close to the surface, constantly switching in orientation. Furthermore, data from the PSP is making scientists question our understanding of the solar wind; rather than the accepted model of a steady stream of charged particles, the data suggests solar winds can flow in spikes and bursts. Placing cameras on a spacecraft at such proximity to the solar surface would be futile, so to give context to this data and provide visual clues to the Sun’s activity, a partner satellite with a larger orbit is required. ESA’s Solar Orbiter, launched in February 2020, is a space-based laboratory dedicated to learning more about the mysteries of the Sun that drive space weather. The Solar Orbiter is home to a series of imagers and magnetometers that will be able to provide unprecedented detail in images of the solar corona and solar wind to complement data from the PSP. The PSP and Solar Orbiter represent a great commitment from their respective space agencies to preparing for space weather, coming at an estimated total cost of over $2 billion.
The PSP and Solar Orbiter are the latest missions to study the Sun, but they won’t be the last. Space agencies across the world have committed to future missions to explore the Sun and its effects on Earth. Perhaps the most exciting future project to study space weather is the ESA Lagrange Mission. Intended to play a major part in ESA’s Space Weather Network, the Lagrange Mission would insert a spacecraft at the L5 Lagrange point, a point in space where the gravitational forces of the Earth and the Sun balance to create a stable orbit. This would give the spacecraft a constant, “side-on” (with respect to Earth) perspective on the Sun and so would allow us to detect any dangerous activity before that area of the surface rotates to face Earth. It would also allow us to see any coronal mass ejections heading toward Earth with much greater clarity, and earlier, due to this unique perspective. These multi-national efforts to increase our understanding of space weather highlight the widespread commitment to the study from governments and space agencies across the Earth. Indeed, in September 2019, the UK Government committed a £20 million funding boost to predicting severe space weather events and protecting British satellites. Furthermore, developments in machine learning techniques promise to increase our forecasting ability to up to seventy-two hours ahead. Machine learning programmes can recognise patterns and causality in large datasets that it would be simply impossible for a human to spot, extending our time to prepare for a space weather disaster from hours to days. Unfortunately, many of these projects are still merely proposals, or at most in their infant stages of collecting data. The L5 Mission, for instance, is yet to even be confirmed and developed, so it is not unreasonable to expect that the craft will not launch until the next decade, if it is approved for funding at all.
That said, it is clear that recently there has been a real focus from governments and space agencies on solving the problem of space weather. Through a commitment to developing technologies to collect and pool data, we have learnt more about our Sun and space weather in the last twenty years than in the two millennia before. Before starting this essay, I was expecting that my research would uncover a complete lack of preparation on global governments’ parts for a space weather disaster. However, what I discovered was much to the contrary; despite what tabloid newspaper headlines suggest, there is a widespread acknowledgement of the potential severity of a solar storm to the modern world. Measurements and data collected on the Sun and its effects on the geomagnetic field have increased tremendously since the Carrington Event in 1859, especially in the last twenty years. The various space-based missions deployed by global space agencies also demonstrate a great commitment (both in time and financially) to space weather as a study and to preventing a possible disaster. That said, there is still a long way to go with the study of space weather. Our forecasting abilities are still relatively primitive, especially when compared to terrestrial meteorology, where we can predict weather systems with reasonable accuracy 10-14 days into the future. With space weather, we may currently have as little as 15 minutes’ warning before the first signs of a solar storm heading toward Earth. This is, of course, after the first solar flare has been produced by the Sun; we have no way of knowing beforehand for certain whether a flare or CME will be produced and is Earth-bound. This is just a limit of the lack of data and the sophistication of our models; we never know for certain about terrestrial weather, either, but the probabilities of our forecasts being correct are much higher than with space weather.
The future of the study of space weather is bright, however, as proposed projects, including ESA’s L5 mission, are sure to increase our forecasting ability dramatically, perhaps allowing us to predict on a similar timescale to weather on Earth, providing ample time to prepare electrical infrastructure for a storm on the scale of the Carrington Event. Although, then, I would conclude that at the moment the Earth would not be prepared for a severe space weather disaster, that has to be followed by an assertion that technologies in this sector are only getting better, and that space weather is being paid its due attention as a threat to our Earth. Though perhaps now we would suffer greatly from an extreme solar storm, our current systems are capable of predicting and mitigating the effects of most space weather threats. Furthermore, I am confident that over the next decades our understanding of the Sun and space weather will be such that we can forecast over a greater period of time, and to greater accuracy, to predict and prepare for the strongest of solar storms in good time. I found the advice of Met Office experts invaluable in guiding my research, providing a different, more informed perspective on space weather technologies than what can be found online. If I were to go back and complete my project again, I would certainly approach experts far sooner, as their input allowed me to focus and narrow my argument and supporting research. Furthermore, I would’ve liked to explore the effects of low-level space weather, such as how astronauts and high-altitude pilots have an increased chance of cancer due to exposure to charged particles from solar flares and wind. This project has taught me valuable skills in the art of writing and research for reports; this was my first time using a referencing system for my sources, and having to conduct a research review.
Moreover, writing about complex physics in such a way as to be accessible to any audience has improved my own understanding of the topics. I am now more confident in my own knowledge in space weather and feel much more informed about how we are working to protect our planet against it.
The ideal psychiatric interview/write-up/presentation is one in which the presenter is able to convey clinically relevant information in a clear, concise, organized manner. A good presenter will leave a "picture" of the patient being presented in the other's mind after the presentation is completed, making it easier to formulate a problem list and plan. The following format is generally accepted, with mild alterations made per individual attending.

I. Identifying Information
Start the write-up/presentation with a clear statement about the patient which helps the listener/reader get a picture of the person. Example: 54 yo married white female who is 8 months pregnant.

II. Chief Complaint
This is the patient's chief complaint, and you should write down what the patient states is the reason for coming in to be evaluated. Do not use technical terminology unless the patient does - rather, put down exactly what the patient says, usually in quotations. Example: Patient's chief complaint is: "I feel depressed;" patient's chief complaint is: "I need a refill of medicine."

III. History of Present Illness
Write down an organized, chronological history of what brings the patient into the hospital now, including all significant symptomatology, precipitating factors, etc. If the patient is presenting to you with a six month history of depression which started when the patient's father died, start six months ago with the death of the father and report what has been going on since then, in chronological order, up until the current time of the interview. Include significant modifiers of the illness, including possible organic factors and drug and alcohol abuse. List all pertinent positive and negative symptoms, which will help you to make an accurate DSM-IV (differential) diagnosis.

IV. Past Psychiatric History
Put in all contact the patient has had with therapists (psychiatrists, psychologists, social workers, and counselors), inpatient units, and other outpatient experiences.
Be sure to include prior rehabilitation programs. If the patient has been on psychotropic medications in the past, list these by date, how long the patient took each one, at what dose, and the effect the medication had on the patient. List any ECT the patient might have had. Also list prior suicide attempts and methods.

V. Past Medical History
List in this area any current medical problems the patient has, and then any past medical, surgical or obstetric problems the patient has had, in chronological order. List the hospitalizations. List all medications (including doses) the patient is currently taking. List any allergies the patient has and what the specific reactions to the medications were.

VI. Family History
A genogram is often useful here for clarity. List all illnesses that the patient's family has had, including medical, psychiatric, and substance abuse history. Write down any psychotropic medications which have been beneficial in family members. Include suicide attempts or completed suicide in family members. Include whether the family members are currently living or are dead. Include the patient's parents, siblings, and children.

VII. Social History/Developmental History
List all substances the patient currently is taking: drugs, alcohol, cigarettes. List how much the patient uses of each, how often, for how many years and in what form (smoked, IV, etc.). Document the last time used. List the patient's educational history, work history, and what the patient currently does to support himself/herself. Are there any ongoing legal issues, felonies, warrants, etc.? Ask who the patient currently lives with. Ask about the patient's marital status, sexual orientation, sexual activity, and children.

VIII. Review of Systems
Put in this category any other information you might have received; i.e., the patient told you he is short of breath a lot, or he has blurred vision.
It is sometimes useful to ask a patient to tell you anything he considers important for you as the physician to know that you have not yet asked.

IX. Mental Status Exam
The mental status exam is extremely important. The best mental status exams allow the person listening to the presentation to develop a snapshot of the patient being presented.

Appearance: Start out the mental status exam by giving a verbal picture of the patient, what the patient is doing, wearing, and how the patient looks. For example: 16 yo BM wearing age appropriate dress of clean jeans, a t-shirt, and sneakers with the laces undone. He was sitting on the floor playing with a train set. He looked up and smiled when the interviewer approached. "16 yo BM O X 3" is a lot less descriptive! After the initial description you have probably already taken care of the general appearance, alertness, hygiene and grooming part of the general description, but if not, include some information here. Look for use of grooming that might be suggestive of a mood state or disorganization. Don't use diagnostic labels, just describe what you see.

Speech: volume, rate, idiosyncratic symbols or other odd speech, tone (include any accent or stuttering).

Motor activity: rate (agitated, retarded), purposefulness.

Mood: ask how they are feeling, usually put in quotes: "depressed," "sad," "great," etc.

Affect: observable emotion (euthymic, neutral, euphoric, dysphoric, flat), the range (full, constricted, blunted), whether it fits appropriately with stated mood or content, lability.

Thought process: organization of a person's thoughts (logical/linear, circumstantial, tangential, flight of ideas, loose associations or thought blocking).

Thought content: basic themes preoccupying the patient, suicidality, homicidality, paranoia, delusions, ideas of reference, obsessions, compulsions. If there is suicidal or homicidal ideation, is there a plan, intent?
Perceptual disturbances: hallucinations (auditory, visual, olfactory, tactile), illusions, de-realization/depersonalization.

Cognitive: level of alertness and orientation. May want to perform a full Folstein MMSE if concerned about dementia.

Insight: into level of illness and/or need for treatment/hospitalization.

Judgment/Impulse control: best determined by history of patterns of behavior and current attitude.

X. Physical Exam
Many medical diseases masquerade as psychiatric, and vice versa (pancreatic CA, hypothyroidism, brain metastases). Do a thorough PE including a full neurological exam and document it. This usually does not include a breast, pelvic, rectal, or genital exam.

XI. Problem List

XII. Differential Diagnosis

XIII. Plan
Include biological (medications, labs, studies), psychological (individual therapy, group therapy, psychological testing), and social (housing, access to care, social services) interventions.

MULTIAXIAL ASSESSMENT
Other conditions that may be a focus of clinical attention. General Medical Conditions influencing diagnosis, treatment, or prognosis of Axis I or II disorders. Psychosocial and Environmental Problems, i.e., problems with primary support group, problems related to the social environment, educational problems, occupational problems, housing problems, economic problems, problems with access to health care services, problems related to interaction with the legal system/crime, other. Global Assessment of Functioning: this scale is for reporting the clinician's judgment of the individual's overall level of functioning. This information is useful in planning treatment, measuring its impact, and predicting outcome. The scale ranges from "0" (inadequate information) to "100" (no symptoms and superior functioning in a wide range of activities).
Healthy aquariums with conditions close to nature

The right water values depend on the fish stock and plants in the aquarium. Even if the water looks clear, it can be contaminated. With poor values, diseases or algae can appear in the aquarium. To maintain a healthy aquarium with conditions close to nature it is important to check and adapt the water values regularly. With the JBL Testlab you can determine the 12 most important water values in your aquarium.

Precise measurement of the following values:
- pH test: acidity of the water from 3.0 to 10
- pH test: acidity of the water from 6.0 to 7.6
- pH test: acidity of the water from 7.4 to 9.0
- O2 test: determination of the oxygen content
- CO2 test: determination of the carbon dioxide content for thriving plant growth
- GH test: determination of the general hardness
- KH test: pH stability of the water (carbonate hardness)
- PO4 test: determination of the phosphate content (a cause of algae growth and a plant nutrient)
- NH4/NH3 test: indication of non-toxic ammonium; determination of the toxic ammonia using the table
- NO2 test: determination of the nitrogen compound nitrite, which is toxic for fish
- NO3 test: determination of the nitrate content (a cause of algae growth and a plant nutrient)
- Fe test: determination of the iron content to monitor fertilization
- Cu test: copper is a deadly heavy metal for invertebrates; also important for setting the therapeutic dose of medications
- SiO2 test: determination of the silicate content (silicic acid) as a cause of algae growth

To determine the CO2 content you just need to measure the pH and KH values. From both values the CO2 content in the water can be read off the enclosed table. With our research department and our worldwide expeditions into the habitats of the animals, JBL permanently sets new standards with its product innovations.
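The pH/KH lookup table reflects the carbonate buffering equilibrium. A rough sketch of the underlying relationship, using the widely quoted aquarist rule of thumb CO2 ≈ 3 × KH × 10^(7−pH) (an approximation that assumes carbonates are the only buffer in the water, and not a formula taken from JBL's own documentation):

```python
def co2_mg_per_l(ph: float, kh_dkh: float) -> float:
    """Approximate dissolved CO2 (mg/l) from pH and carbonate hardness (dKH).

    Uses the common aquarist rule of thumb CO2 ~= 3 * KH * 10^(7 - pH),
    which assumes carbonates are the only buffering system present.
    Values from a test-kit table may differ slightly.
    """
    return 3.0 * kh_dkh * 10 ** (7.0 - ph)

# At the same carbonate hardness, a lower pH implies more dissolved CO2:
print(round(co2_mg_per_l(7.0, 5), 1))  # 15.0
print(round(co2_mg_per_l(6.5, 5), 1))  # 47.4
```

This is why the test kit needs only the pH and KH readings: the third value follows from the other two.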
Using a tiny microscope and a cell phone, students examine the red, green and blue pixels that use additive color to create all the colors you see on your computer screen.

Grade Range: 4-9
Duration: 1/2 Hour
Materials: Micro-Phone Lens, phone with camera (5+ megapixels), computer screen, and Light Blox

PS4.A: Wave Properties
Information can be digitized (e.g., a picture stored as the values of an array of pixels); in this form, it can be stored reliably in computer memory and sent over long distances as a series of wave pulses (HS-PS4-2, HS-PS4-5)
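The additive mixing students observe under the lens can be sketched in a few lines: each screen color is the per-channel sum of its red, green and blue sub-pixel intensities, clipped to the displayable maximum. This is an illustrative model only, not part of the lesson materials:

```python
def add_colors(c1, c2):
    """Additively mix two RGB colors (0-255 per channel), clipping at 255.

    Models how a screen's red, green and blue sub-pixels combine:
    light adds, so full red plus full green appears yellow.
    """
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_colors(RED, GREEN))                    # (255, 255, 0) -> yellow
print(add_colors(add_colors(RED, GREEN), BLUE))  # (255, 255, 255) -> white
```

Students can check this against what they see: a yellow region of the screen, viewed through the lens, shows lit red and green sub-pixels with the blue ones dark.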
It is normal for your hands and feet to turn cold during the winter season or if you are exposed to dry, windy air. However, there is certainly something abnormal when you find your hands and feet cold throughout the year. Poor blood circulation in the limbs is the main cause of cold hands and feet. When exposed to cold air, the arteries and capillaries of the limbs narrow or become constricted. This results in poor blood supply to your hands and feet, and so they turn cold. Besides a cold atmosphere, your hands and feet can turn cold with some underlying health problem. At times cold extremities can be painful and discomforting.

What Causes Cold Hands And Feet?

There are a number of reasons that can turn your hands and feet cold; here are some of the common causes:
- If you are inadequately protected from a cold atmosphere, meaning you are not wearing woolen clothes and gloves to protect against the cold, the peripheral arteries become narrow. The body restricts the flow of blood to the limbs with the intention of keeping the central and core part of the body warm.
- Raynaud's syndrome: the condition is more common in women than in men, and occurs below the age of 50. In this condition there is abnormal constriction of the arteries and capillaries of the limbs. Due to constriction of the arteries there is less supply of blood to the hand and foot tissues. When these people are exposed to cold air, their fingers and toes become pale or white. The fingers and toes in Raynaud's syndrome are extremely painful.
- Hormonal fluctuation, as occurs during menstruation, or hormonal imbalance in thyroid disease can lead to cold hands and feet. If you are on a strict diet or suffering from anorexia nervosa, your extremities will feel cold to the touch. Mental strain and stress can constrict the arteries, and this can affect not only your heart but the arteries of the limbs too.
- Frost nip and frostbite due to exposure to extreme freezing temperatures can adversely affect the blood circulation of the extremities. A person may experience cold extremities, with blisters and pain, if there is no prompt care.
- Certain diseases such as diabetes, atherosclerosis, peripheral vascular disease (especially due to smoking), panic attacks, iron deficiency anemia, and allergic shock that causes low blood pressure and perspiration can lead to cold hands and feet.

Prevention And Treatment For Cold Feet And Hands

Cold hands and feet can be prevented if you take simple precautions in the first place: use warm clothing during cold weather. Your body has to be covered from head to toe. Wear gloves on your hands and socks on your feet, as they are vulnerable to extremely cold temperatures.
- Avoid and stop smoking. Smoking causes peripheral vascular disease, one of the leading causes of cold extremities.
- Exercise: exercise helps to improve vascular circulation. If an artery has constricted, exercise helps to develop healthy collaterals which can compensate for the main artery. Exercise also increases your metabolic rate and helps to keep your body warm.
- Your diet should consist of vitamins, minerals and healthy nutrients that will keep your body in a healthy condition.
- In case of frostbite and frost nip, do not apply pressure or rub while cleaning. Instead, consult your physician; he will re-warm the affected part in water slightly warmer than your body temperature. The frozen part is thawed until it returns to its normal pink color.
- Alternatively, a stimulating massage with essential oils can help to increase circulation in the extremities. A rosemary essential oil massage with a base oil such as coconut oil is a good way to treat cold feet and hands.
Gut health has been put at the forefront of health concerns thanks to emerging studies showing the importance of cultivating your microbiome. The microbiome is the hub inside your body that contains all the organisms living in your gut. Your microbiome accounts for about eighty percent of your immune system–so it's really important! One of the best ways to support this microbiome is by eating fermented foods. You've heard really good things about fermented foods, but have you ever wondered what the fermentation process really does? Essentially, the foods are exposed to bacteria and yeast, and steeped until the carbohydrates and sugars become bacteria-boosting agents. This undoubtedly results in unique flavours that can require some getting used to. Once you have, the delicious flavour and health benefits are truly rewarding. To add some good probiotics to your microbiome, try these delicious fermented foods.

1. Kombucha

You'd have to be Rip Van Winkle to not have heard of kombucha. Seriously, it's all the craze, and for good reason. Kombucha is essentially fermented sweetened tea. The starter tea is exposed to a SCOBY (kombucha bacteria culture) and fermented for about a month, resulting in a mild and pleasant vinegary taste. Kombucha is excellent for boosting your microbiome and is tremendous when it comes to digestive aid. If you're having a little bowel trouble, grab a bottle of kombucha!

2. Kefir Water

There's a little debate on whether kefir water is better than kombucha. The truth is, they both offer different benefits. Kefir water likely appeals to more people for its delicious flavour, slightly resembling coconut water. It is made by adding kefir grains to sugar water, fruit juice, or coconut water, and fermenting it for one to two days. Kefir water contains more bacteria strains than kombucha, which means that you're adding microbiome diversity. Need a little probiotic boost? Try kefir water.

3. Tempeh

Wondering what tempeh is?
It started off as an accidental creation when the Chinese brought the tofu industry to Indonesia, and somehow flourished in an amazing way. Tempeh is actually healthier than tofu! This is largely due to its fermentation process. It's essentially whole soybeans that have been cooked and pressed down, giving it a meaty and grainy texture. The fermentation process adds a pleasant tangy flavour. Besides the probiotics, tempeh packs a lot of protein and a long list of minerals.

4. Pickled Foods

Pickling is a surefire way to not let any produce go to waste, and there are a lot of pickling methods out there! But to truly reap the health benefits of pickled foods, make sure that they're fermented. The naturally occurring microorganisms on the vegetables' skins form lactic acid (which gives pickled foods their sour taste). Steep your fresh vegetables in a salt brine inside an airtight container for up to four weeks. Don't forget to salvage your leftover vegetable parts–they're not necessarily compost!

5. Yogurt

Let's not forget about the most popular type of cultured dairy product: yogurt! Whether you're having regular yogurt, Greek yogurt, full-fat yogurt, or even dairy-free yogurt, you can get a good boost of probiotics! The probiotics found in yogurt help with digestive function, and help produce vitamin B12. Yogurt also contains diverse strains of bacteria, which really helps with your microbiome. Whether you're making a yogurt bowl or yogurt bark, you'll be happy to know that you're giving your immune system a good boost. Probiotics are really important, as they can help protect you from the cold and the flu. And fermented foods are packed with probiotics. Have you tried any fermented foods? Share your favourites with us, so we can add them to the list.
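The salt-brine step above is simple arithmetic: salt is weighed as a percentage of the water weight. Here is a minimal sketch; the function name and the 2% default are illustrative assumptions, not from the article (vegetable ferments commonly use roughly 2-5%):

```python
def brine_salt_grams(water_g: float, salt_pct: float = 2.0) -> float:
    """Grams of salt for a brine of the given strength.

    salt_pct is the salt weight as a percentage of the water weight.
    """
    return water_g * salt_pct / 100.0

# For 1 litre (~1000 g) of water at a 2% brine: 20 g of salt.
print(brine_salt_grams(1000))
```

Weighing by percentage rather than by volume keeps the brine strength consistent regardless of how coarse or fine the salt is.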
Creep (noun): My college roommate… Why girls never came over.

Creep (verb): a soft tissue reaction whereby collagenous structures gradually change in length or displacement when a force is applied over a period of time… Why you're shorter at the end of the day.

How often do you have to adjust the rear-view mirror on the drive home? Sure, Harford County roads are partially to blame, but by 5pm we're all about 1-2% shorter. Blame Darwin; we're just animals adapting to our environment.

We start our day hunched over simple carbs and neuro-stimulants. Then off to be hunched over the steering wheel, keyboard, not-so-smart 'smart' phone, back to the steering wheel (stopping off somewhere for some extra-high-inflammatory processed lunch from McSatan's), then home again and back to the keyboard (but this time it's okay because you're reading this).

This causes a shortening and tightening of our anterior muscles: rotator cuff, thoracic outlet and carpal tunnel syndromes, and decreased oxygen consumption. And conversely, a slow over-stretching and weakening of the posteriors: headache, neck pain, back pain and abnormal nerve activity. That's why, for those of you we performed a Functional Capacity Examination on, we strive for a 1:1.14 flexor-to-extensor ratio. In other words, your back needs to be a little stronger than your front (except your knees, they're backwards). So, in short, stretch the front and build up the back. Don't know how? Try Google. But if you want the right answers, ask me.
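The 1-2% shrinkage figure and the 1:1.14 ratio above are just arithmetic. A toy sketch (the function names are mine, purely for illustration, not the clinic's protocol):

```python
def end_of_day_height_cm(morning_cm: float, shrink_pct: float) -> float:
    """Height after daily spinal creep, given a percentage loss."""
    return morning_cm * (1.0 - shrink_pct / 100.0)

def target_extensor_strength(flexor: float, ratio: float = 1.14) -> float:
    """Extensor (back) strength target for a 1:1.14 flexor:extensor ratio."""
    return flexor * ratio

# A 175 cm person losing 2% by 5pm stands about 171.5 cm.
# A flexor score of 100 implies an extensor target of 114.
```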
Issue No. 04 - July/August (2002 vol. 14) <p>This paper presents a method for finding patterns in 3D graphs. Each node in a graph is an undecomposable or atomic unit and has a label. Edges are links between the atomic units. Patterns are rigid substructures that may occur in a graph after allowing for an arbitrary number of whole-structure rotations and translations as well as a small number (specified by the user) of edit operations in the patterns or in the graph. (When a pattern appears in a graph only after the graph has been modified, we call that appearance “approximate occurrence.”) The edit operations include relabeling a node, deleting a node and inserting a node. The proposed method is based on the geometric hashing technique, which hashes node-triplets of the graphs into a 3D table and compresses the label-triplets in the table. To demonstrate the utility of our algorithms, we discuss two applications of them in scientific data mining. First, we apply the method to locating frequently occurring motifs in two families of proteins pertaining to RNA-directed DNA Polymerase and Thymidylate Synthase and use the motifs to classify the proteins. Then, we apply the method to clustering chemical compounds pertaining to aromatic, bicyclicalkanes, and photosynthesis. Experimental results indicate the good performance of our algorithms and high recall and precision rates for both classification and clustering.</p> KDD, classification and clustering, data mining, geometric hashing, structural pattern discovery, biochemistry, medicine. Xiong Wang, Jason T.L. Wang, Dennis Shasha, Bruce A. Shapiro, Isidore Rigoutsos, Kaizhong Zhang, "Finding Patterns in Three-Dimensional Graphs: Algorithms and Applications to Scientific Data Mining", IEEE Transactions on Knowledge & Data Engineering, vol. 14, no. , pp. 731-749, July/August 2002, doi:10.1109/TKDE.2002.1019211
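The triplet-hashing idea described in the abstract can be sketched in a few lines: pairwise distances within a node triplet are invariant under whole-structure rotations and translations, so a key built from the sorted node labels plus discretized distances sends matching triplets to the same bucket. This is a toy illustration of the general geometric hashing technique, not the authors' implementation; the dictionary-based table, the graph representation, and the 0.5-unit resolution are my assumptions.

```python
import math
from collections import defaultdict
from itertools import combinations

def triplet_key(graph, triplet, resolution=0.5):
    """Rotation/translation-invariant key for a node triplet:
    sorted node labels plus pairwise distances snapped to a grid."""
    labels = tuple(sorted(graph[n]["label"] for n in triplet))
    dists = tuple(sorted(
        round(math.dist(graph[a]["pos"], graph[b]["pos"]) / resolution)
        for a, b in combinations(triplet, 2)))
    return labels, dists

def build_table(graph):
    """Hash every node triplet of the graph into a bucket table."""
    table = defaultdict(list)
    for triplet in combinations(graph, 3):
        table[triplet_key(graph, triplet)].append(triplet)
    return table
```

A pattern occurring in a second graph, however rotated or translated, lands in the same buckets, so candidate matches can be read off the table before edit operations (relabel, delete, insert) are considered.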
Green beans are the unripe, young fruit and protective pods of various cultivars of the common bean. Immature or young pods of the runner bean, yardlong bean, and hyacinth bean are used in a similar way. Take a look below for 25 more fun and fascinating facts about green beans.

1. Green beans are known by many common names, including French beans, string beans, snap beans, and snaps.
2. They are distinguished from the many differing varieties of beans in that green beans are harvested and consumed with their enclosing pods, typically before the seeds inside have fully matured.
3. In the past, bean pods often contained a "string", which was a hard fibrous strand running the length of the pod.
4. This fibrous strand was removed before cooking, or made edible by cutting the pod into short segments.
5. Modern, commercially grown green bean varieties lack the hard fibrous strand altogether.
6. Green beans are eaten around the world and are marketed canned, frozen and fresh.
7. They're often steamed, boiled, stir-fried, or baked in casseroles.
8. A dish with green beans popular throughout the United States, particularly at Thanksgiving, is green bean casserole, which consists of green beans, cream of mushroom soup and French fried onions.
9. Some U.S.-based restaurants serve green beans that are battered and fried, while some Japanese restaurants serve green bean tempura.
10. Green beans are sometimes sold dried, and fried with vegetables such as carrots, corn, and peas, as vegetable chips.
11. The first "stringless" bean was bred in 1894 by Calvin Keeney, who is known as the "father of the stringless bean," while working in Le Roy, New York.
12. Green beans are classified by growth habit into two major groups, "bush" beans and "pole" beans.
13. Bush beans are short plants, growing no more than 2 feet in height, often without requiring support.
They generally reach maturity and produce all of their fruit in a relatively short period of time, then cease to produce.
14. Pole beans have a climbing habit and produce a twisting vine, which must be supported by poles, trellises or other means. Pole beans can be common beans, runner beans or yardlong beans.
15. Over 130 varieties of green bean are known.
16. Leaves of green beans can be green or purple in color. They're divided into three lobes and have smooth edges. The leaves are alternately arranged on the stem.
17. Green beans produce white, pink or purple flowers which are usually pollinated by insects.
18. They propagate through seeds. It takes about 45 to 60 days from planting to harvesting.
19. Green beans are a rich source of proteins, carbohydrates and dietary fibers. They also contain vitamins of the B group, vitamins C and K, and minerals such as magnesium, iron and manganese.
20. Green beans need to be cooked before consumption. Steaming, boiling, frying and baking are the usual methods used for preparing dishes made of green beans.
21. Raw green beans have a high content of lectins, which can be harmful to human health. The high temperature of cooking destroys lectins.
22. China is the biggest producer of green beans. It produces and exports over 15 million tons of green beans each year.
23. Leaves of green beans are covered with miniature hairs which can be used for trapping bed bugs.
24. The green bean is an annual plant, which means that it finishes its life cycle in one year.
25. Green bean plants originated in Peru, but can be found all around the world today. People have cultivated and eaten green beans for at least 7,000 years.
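The 45-to-60-day figure in fact 18 translates directly into a harvest window. A small sketch (the function name and the example planting date are illustrative assumptions):

```python
from datetime import date, timedelta

def harvest_window(planting: date, min_days: int = 45, max_days: int = 60):
    """Earliest and latest expected green-bean harvest dates."""
    return (planting + timedelta(days=min_days),
            planting + timedelta(days=max_days))

earliest, latest = harvest_window(date(2024, 5, 1))
# Beans planted on 1 May 2024 should be ready between 15 and 30 June.
```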
When people say that someone eats like a bird, they usually mean that the person eats very little. But, in some ways, the assumptions behind the saying aren’t accurate. The truth is, birds eat a lot when comparing the amount that they consume with their size. Also, one doesn’t see many humans imitating the dining habits described below. Can you match these birds that spend some time in the Chesapeake watershed with their food-related activities? Answers are below. American tree sparrow 1. Your work is done, time for yum. I like to stand above anthills and let the little buggers climb all over me. Ants secrete formic acid, which helps me repel mites and other pests. Then I eat them. 2. Belly up! I have been seen wading into water as deep as my stomach to catch small fish to eat. Humans may use worms to catch fish, but that would be a waste of my more common prey. 3. What a “mothful!” In addition to moths and caterpillars, I have been known to eat beetles and their larvae, dragonflies, snails, millipedes and earthworms, as well as buds and berries. 4. News to use: Some of my captive kin have been seen using newspapers strips to gather food into their cages. People tend to forget that I am mostly vegetarian and seem to only remember (and despise) me for occasionally eating the eggs or nestlings of other birds. 5. Don’t stick your nose up at my eating habits! One of my favorite restaurants is an outhouse because it attracts the flies that I like to eat. In some places, I am even known as the “latrine bird.” 6. Beat it! When snow is on the ground, I get tall weeds to release their seeds by beating the weeds with my wings and eating the seeds that fall to the ground. 7. Mind if I have a bite? I don’t feel like hunting today and what that duck has looks pretty tasty! Can you blame me for taking it? 8. In the pink. A few of my kind develop pinkish belly feathers. The cause is thought to be a diet high in crayfish. 1. American crow; 2. American robin; 3. Scarlet tanager; 4. 
Blue jay; 5. American redstart; 6. American tree sparrow; 7. American coot; 8. Barred owl
Gamma Ursae Majoris

Gamma Ursae Majoris (γ Ursae Majoris, abbreviated Gamma UMa, γ UMa), formally named Phecda, is a star in the constellation of Ursa Major. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Based upon parallax measurements with the Hipparcos astrometry satellite, it is located at a distance of around 83.2 light-years (25.5 parsecs) from the Sun.

It is more familiar to most observers in the northern hemisphere as the lower-left star forming the bowl of the Big Dipper, together with Alpha Ursae Majoris (Dubhe, upper-right), Beta Ursae Majoris (Merak, lower-right) and Delta Ursae Majoris (Megrez, upper-left). Along with four other stars in this well-known asterism, Phecda forms a loose association of stars known as the Ursa Major moving group. Like the other stars in the group, it is a main sequence star not unlike the Sun, although somewhat hotter, brighter and larger.

Phecda is located in relatively close physical proximity to the prominent Mizar-Alcor star system. The two are separated by an estimated distance of 8.55 ly (2.62 pc), much closer than either is to the Sun. The star Beta Ursae Majoris is separated from Gamma Ursae Majoris by 11.0 ly (3.4 pc).

It bore the traditional names Phecda or Phad, derived from the Arabic phrase فخذ الدب fakhth al-dubb 'thigh of the bear'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Phecda for this star.

In Chinese, 北斗 (Běi Dǒu), meaning Northern Dipper, refers to an asterism equivalent to the Big Dipper. Consequently, the Chinese name for Gamma Ursae Majoris itself is 北斗三 (Běi Dǒu sān, English: the Third Star of Northern Dipper) and 天璣 (Tiān Jī, English: Star of Celestial Shining Pearl).
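The quoted distance follows directly from the parallax: distance in parsecs is the reciprocal of the parallax in arcseconds. A quick check; the ~39.2 mas Hipparcos parallax used here is inferred back from the quoted 25.5 pc, so treat the exact value as an assumption:

```python
def parallax_to_distance(parallax_mas: float):
    """Convert a parallax in milliarcseconds to parsecs and light-years."""
    parsecs = 1000.0 / parallax_mas   # d [pc] = 1 / p [arcsec]
    light_years = parsecs * 3.26156   # 1 pc is about 3.26156 ly
    return parsecs, light_years

pc, ly = parallax_to_distance(39.21)  # roughly 25.5 pc, 83.2 ly
```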
Gamma Ursae Majoris is an Ae star, which is surrounded by an envelope of gas that is adding emission lines to the spectrum of the star; hence the 'e' suffix in the stellar classification of A0 Ve. It has 2.6 times the mass of the Sun, three times the Sun's radius, and an effective temperature of 9,355 K in its outer atmosphere. This star is rotating rapidly, with a projected rotational velocity of 178 km s⁻¹. The estimated angular diameter of this star is about 0.92 mas. It has an estimated age of 300 million years.

Gamma Ursae Majoris is also an astrometric binary: the companion star regularly perturbs the Ae-type primary star, causing the primary to wobble around the barycenter. From this, an orbital period of 20.5 years has been calculated. The secondary star is a K-type main-sequence star that is 0.79 times as massive as the Sun, and with a surface temperature of 4,780 K.
For more than five decades, humans have been hurtled into space. These pioneers have not only carried along all of life’s necessities, they also have stowed carefully the power of imagination and the desire to discover. It is these intangibles that have built the legacy of space-based research. We talked with some HMS researchers who, together with intrepid astronauts, have used the special laboratory of space to test the limits of their questions of science. Their stories provide a glimpse of the medical benefits these research collaborations have produced: pocket-sized imaging tools; light therapy to reset circadian clocks; drugs to rebuild bone; software-based behavioral therapy. And they provide a glimmer of what discoveries await.
CalEnviroScreen; Biomonitoring; Environmental justice; Health impact assessment; Risk assessment

Why is this useful?

Solomon GM, Morello-Frosch R, Zeise L, Faust JB. Cumulative Environmental Impacts: Science and Policy to Protect Communities. Annu Rev Public Health. 2016;37:83-96. doi: 10.1146/annurev-publhealth-032315-021807. Epub 2016 Jan 6. PMID: 26735429.

Many communities are located near multiple sources of pollution, including current and former industrial sites, major roadways, and agricultural operations. Populations in such locations are predominantly low-income, with a large percentage of minorities and non-English speakers. These communities face challenges that can affect the health of their residents, including limited access to health care, a shortage of grocery stores, poor housing quality, and a lack of parks and open spaces. Environmental exposures may interact with social stressors, thereby worsening health outcomes. Age, genetic characteristics, and preexisting health conditions increase the risk of adverse health effects from exposure to pollutants. There are existing approaches for characterizing cumulative exposures, cumulative risks, and cumulative health impacts. Although such approaches have merit, they also have significant constraints. New developments in exposure monitoring, mapping, toxicology, and epidemiology, especially when informed by community participation, have the potential to advance the science on cumulative impacts and to improve decision making.

Gina M. Solomon, Rachel Morello-Frosch, Lauren Zeise, and John B. Faust
By Joanna Kyriakakis

The Azaria Chamberlain case is a reminder that the criminal justice system does get it wrong, with each error bearing its own human cost.

Last week, the Northern Territory Coroner's office concluded an inquest into the cause of death of baby Azaria Chamberlain near Uluru on the night of 17 August 1980. The finding: a dingo took the baby.

Despite the same determination in the original coronial inquest, Lindy Chamberlain-Creighton was tried and found guilty of the child's murder in one of the most public criminal cases in Australian history. She was sentenced to life in prison and served nearly three years before new evidence and a Royal Commission Inquiry led to a pardon and reversal of her wrongful conviction.

The Chamberlain-Creighton conviction was based largely on the use of unreliable or improper forensic science during the trial. But the Chamberlain case is only one example of wrongful conviction following a flawed criminal trial. Many people have been sent to jail or even executed on the basis of faulty evidence despite their innocence.

The Innocence Project in the United States is dedicated to exonerating wrongfully convicted people through the use of modern DNA testing. It reports that in US history there have been 292 post-conviction DNA exonerations. In its experience, the most common causes of wrongful conviction are eyewitness misidentification, improper forensic science, false confessions, government misconduct, reliance on informants, and plain old bad lawyering – including defence counsel sleeping during trial.

Unreliable or improper forensic science was found to be present in 52% of the first 225 exoneration cases the Innocence Project dealt with. A lack of scientific standards for assessing the results of forensic testing methods was a key finding by Justice Morling in the Royal Commission Inquiry into the Chamberlain case.
This unreliable evidence, along with unverified assumptions by experts, was presented as scientific evidence to the court. Despite advancements in forensic practice, modern concerns persist with respect to its use in criminal trials.

For example, the term "the CSI effect" has been coined to describe the impact of television programs that depict a high level of sophistication in current forensic sciences. These shows foster unrealistic expectations among jurors as to the need for, and veracity of, forensic science in criminal trials.

Also with respect to jury trials, studies have shown that jurors can be prone to confusion as to legal directions and factual narratives, especially in cases involving complex evidence. They may also automatically defer to expert witnesses.

There is a large corpus of international human rights law directed at ensuring procedural fairness for defendants. These rights seek to balance the usually limited power of defendants relative to that of the state and to minimise the risks of injustice.

Key among these is Article 14 of the International Covenant on Civil and Political Rights (ICCPR), which ensures basic rights. These include the presumption of innocence, the right to silence, to be dealt with by an impartial tribunal, to be fully informed of charges, to participate fully in the examination of evidence and to be protected against multiple prosecutions for the same offence. Article 14 also confirms the right of all persons to appeal to a higher court and to compensation where new evidence leads to exoneration.

But even with due process assurances, errors occur. In states retaining the death penalty, such errors may exact the highest cost of all. The Death Penalty Information Centre reports that in the United States 140 people have been released from death row since 1973 due to evidence of wrongful conviction.

Some would say that this shows the criminal justice system working, as appeal processes have enabled the errors to be identified. To some extent this is true. Sadly, however, there are cases where the error has not been uncovered in time, or limitations in the system have worked against its recognition.

For example, there is evidence to suggest that in 2004 Cameron Todd Willingham was wrongfully executed by the state of Texas. His murder convictions had been founded on discredited scientific theories as to how the fire that killed his children most likely occurred. Earlier this year the Columbia Human Rights Law Review dedicated one of its editions to research detailing how Carlos DeLuna was executed in 1989 for a crime that he most likely did not commit. Studies in the United States also suggest the race of a victim has a bearing on the likelihood of an imposition of the death penalty.

The death penalty and international law

International human rights law has yet to prohibit the use of capital punishment. Instead, Article 6 of the ICCPR limits its application to the most serious crimes and to defendants over 18 years of age. Despite this, there is a global trend towards its abolition. This trend is supported by international instruments such as the 1989 Second Optional Protocol to the ICCPR, which confirms that the abolition of the death penalty contributes to the enhancement of human dignity and the progressive development of human rights.

The Chamberlain case is a reminder that criminal justice systems are fallible. For the family in this case, the legal system has, as far as is possible, rectified the errors – the criminal conviction has been reversed, financial compensation awarded, and now the accurate recording of the cause of Azaria's death. Some wrongs can be rectified.

But as philosopher John Stuart Mill acknowledged in his eloquent defence of the death penalty in 1868, there is one argument against the practice which "never can be entirely got rid of… [T]hat if by an error of justice an innocent person is put to death, the mistake can never be corrected".
International Women’s Day is celebrated every year on March 8 to recognise the social, economic, cultural and political achievements of women. The event celebrates women’s achievements, raises awareness about women’s equality and lobbies for accelerated gender parity. Observed for the first time in 1911, International Women’s Day aims to highlight and recognise the achievements of women in different spheres while also bringing attention to important issues, including gender discrimination, that persist even today.

History of International Women’s Day

IWD has been celebrated for over a century now, but many people think of it purely as a feminist cause. Its roots, however, are found in the labour movement: it was first organised in 1911 at the prompting of Clara Zetkin, an early-20th-century German Marxist. Zetkin was born in 1857 in Wiederau, where she trained as a teacher, and was associated with the Social Democratic Party (SPD), one of the two major political parties in Germany. She participated in both the labour movement and the women’s movement. In the 1880s, when anti-socialist laws were enforced by the German leader Otto von Bismarck, Zetkin went into self-imposed exile in Switzerland and France. She wrote and distributed literature that was then forbidden, and also met with leading socialists. Zetkin also played a significant role in the formation of the Socialist International. When she returned to Germany, she was the editor of Die Gleichheit (‘Equality’), the SPD’s newspaper for women, from 1892 to 1917. In the SPD, Zetkin was closely associated with the far-left thinker and revolutionary Rosa Luxemburg. In 1910, three years after she became a co-founder of the International Socialist Women’s Congress, Zetkin proposed at a conference that Women’s Day be celebrated in every country on February 28.
The conference, comprising 100 women from 17 countries representing unions, socialist parties, working women’s clubs and female legislators, unanimously approved the suggestion, and Women’s Day was observed for the first time in 1911. In 1913 the date was changed to March 8, and it continues to be celebrated on that date every year.

International Women’s Day 2022 Theme

The theme of the 2022 International Women’s Day is “gender equality today for a sustainable tomorrow”. “Advancing gender equality in the context of the climate crisis and disaster risk reduction is one of the greatest global challenges of the 21st century. Women are increasingly being recognized as more vulnerable to climate change impacts than men, as they constitute the majority of the world’s poor and are more dependent on the natural resources which climate change threatens the most. At the same time, women and girls are effective and powerful leaders and change-makers for climate adaptation and mitigation…This International Women’s Day, let’s claim “Gender equality today for a sustainable tomorrow”,” reads a statement by the United Nations. Additionally, internationalwomensday.com states that the IWD 2022 campaign theme is ‘#BreakTheBias’. It intends to promote a “gender equal world” that is “free of bias, stereotypes, and discrimination”; “a world that is diverse, equitable, and inclusive”, where “difference is valued and celebrated”.

International Women’s Day Significance

International Women’s Day is celebrated to recognise the social, economic, cultural and political achievements of women. Organisations such as colleges and institutions across the world celebrate International Women’s Day by holding public speeches, rallies, exhibitions, workshops and seminars on themes and concepts, debates, quiz competitions and lectures.
Studies and experts both strongly point to asbestos exposure as the primary cause of mesothelioma cancer. The longer a person is exposed to these particles, the higher the chances of contracting the disease. Therefore, people whose line of work involves asbestos exposure are the ones most likely to develop it. Technical uses of asbestos include soundproofing, insulation, roofing, fireproofing, and the manufacture of ironing board covers. Hence, the companies and industries that use or benefit from asbestos include construction corporations, power plants, oil refineries, steel mills, and shipyards. There are also exceptional cases where the disease may be contracted through inhaling other fibrous silicates, such as erionite. Mining also happens to be one of the main culprits. In fact, even though asbestos mining is no longer carried out in Australia, its effects are still widespread, as many people still suffer from symptoms of mesothelioma cancer. This is why those in the mining industry are urged to continuously look for global mining solutions that can eradicate the spread of this disease. Indeed, even many years or decades after asbestos exposure, this type of cancer can still manifest. Ingested or inhaled asbestos particles or fibers eventually accumulate along the lining of the lungs, chest, or abdomen and heighten the chances of cancer cells developing. Mesothelioma is categorized as a rare cancer targeting mesothelial cells, especially those found in the lining covering the lungs (the pleura). Mesothelial cells comprise the lining or membrane found on the outer surfaces of body organs. Most people afflicted with this rare disease have had exposure to asbestos at some point in their lives.

Types of Mesothelioma

- Pleural mesothelioma – this targets and affects the pleura, the lining surrounding the lungs, and is considered the most common form.
- Peritoneal mesothelioma – this type affects the lining of the abdomen, called the peritoneum, and is the second most common form.
- Pericardial mesothelioma – this type strikes the protective layer covering the heart and is believed to be the rarest type.

Signs and Symptoms of Mesothelioma

Symptoms of pleural mesothelioma include shortness of breath, unexplained weight loss, painful persistent coughing, exhaustion and a seeming lack of energy, pain under the rib cage, occasional lumps beneath the skin within the chest area, lower back pain, and at times feverish conditions. Peritoneal mesothelioma is characterized by abdominal pain and swelling, unexpected weight loss, abdominal lumps, and nausea or vomiting. Pericardial mesothelioma is distinguished by low blood pressure, edematous conditions especially in the feet, shortness of breath, palpitations, stabbing chest pains, and low energy resulting in faster exhaustion even when performing very minimal physical activity.

Diagnosis of Mesothelioma

A digital chest X-ray is still the leading and most common way of detecting mesothelioma. After a thorough profile of the patient is taken, covering work history, lifestyle, past ailments, and family history, imaging scans may be ordered, either a CT scan or an X-ray of the abdomen or chest. Depending on the scan results, more procedures may be recommended to rule out possibilities. A biopsy is another diagnostic procedure, in which tissue is removed for further examination under a microscope; the type of biopsy depends on which of the patient’s organs is targeted. Laparotomy is a procedure in which the patient’s abdomen is opened for physical examination, and tissue samples are then removed for laboratory analysis.

Prognosis for Mesothelioma

Because cancer remains one of the most unpredictable diseases known to humanity, coming up with a precise prognosis is difficult and tricky.
Mesothelioma is considered a highly aggressive form of cancer and has a long latency period. In most cases, detection occurs when the cancer is already at an advanced stage, and survival is pegged at an average of about one to two years.
The origin of life is a pretty enormous mystery. There are several theories for how life might have come about, but it's difficult to design experiments to narrow down these options. In the meantime, researchers continue to look for clues and evidence for life that didn't originate on our planet. Here are just a few examples that could one day lead us in the right direction.

Ah, life imitating art (or art accidentally imitating life). Earlier this year, we had Rob Reid post an excerpt and discuss his new novel, Year Zero, concerning aliens listening to Earth music for free, without a license... and then realizing that they've been infringing our copyrights for years, and owe the record labels more money than exists in the galaxy. Funny story, right? Except... as Joe Betsill points out, apparently at least EMI really was afraid that aliens might listen to music without a license. In the Wikipedia entry for the Beatles' famous song, "Here Comes the Sun" it notes the following bit of trivia:

Astronomer and science popularizer Carl Sagan had wanted the song to be included on the Voyager Golden Record, copies of which were attached to both spacecraft of the Voyager program to provide any entity that recovered them a representative sample of human civilization. Although The Beatles favoured the idea, EMI refused to release the rights and when the probes were launched in 1977 the song was not included.

Of course, just a few weeks ago, we also discussed Sagan and the Voyager Golden Record, in noting how the world is changing in that we no longer have to wait for the modern Carl Sagans to decide what gets sent into space any more. So, perhaps the story in Year Zero isn't so far-fetched after all...

Looking for extraterrestrial life has been a largely fruitless task for many decades. There have been a few times when people thought they might have found evidence of life that wasn't from Earth, but upon further analysis, those discoveries weren't so clear cut.
Still, the search for ETs is ongoing, and here are just a few links on some ways to find alien friends.

As mentioned earlier this week, at 12:30pm PT/3:30pm ET today, we're hosting a live video chat with Rob Reid, the author of Year Zero, our Techdirt Book Club book of the month for July. Rob will be joining me at our offices, and we'll be broadcasting the conversation via Google Hangouts on Air. We'll be posting the embed beneath this text, so it should stream live if you're reading this while the conversation is ongoing. If you're here afterwards, the video should be replayable below. If you have any questions that you'd like to ask Rob during this conversation, please tweet with the hashtag #yearzero. We will be monitoring that tweetstream and will try to take some questions from the audience. This is the first time that we're doing something like this via a video stream, rather than a text chat. Please bear with us in case there are technical difficulties. It is very much an experiment, but hopefully provides a worthwhile experience.

As we announced a few weeks ago, the July Techdirt Book Club book is Year Zero, written by Rob Reid, which comes out today, published by Random House. Rob will be joining us in a few weeks to talk about writing a comic sci-fi novel about the mess that is copyright law... but in the meantime, he's provided the following excerpt, which is Chapter 1. There is a "prologue" before this, which you can read here, or you can just watch this video, which more or less covers the prologue info:

As part of this, Rob and Random House have agreed to do another giveaway, this time just for Techdirt readers, which will go to five commenters on this post, based on your voting scores on the comments. We'll give one copy of the book each to the highest ranked "funny" and "insightful" comments, and then the three highest total scores other than the top ranked (so either funny or insightful).
There are a few conditions: you have to be in the US or Canada. I know this sucks for those of you not in those places, but there's nothing we can do about it. Also, to win, we obviously have to be able to contact you, which means (a) you need to be logged in when you comment, so we can email you and (b) you have to respond to our email informing you of your win within 24 hours of our email. Also, you can't win twice -- if you score the highest in multiple categories, you get a prize in one and the others will go to the runners-up. We'll keep the voting open until Wednesday night and then tally the votes. So, get to work with your funny/insightful comments...

CHAPTER ONE: ASTLEY

Even if she'd realized that my visitors were aliens who had come to our office to initiate contact with humanity, Barbara Ann would have resented their timing. Assistants at our law firm clear out at five-thirty, regardless -- and that was almost a minute ago. "I don't have anyone scheduled," I said, when she called to grouse about the late arrival. "Who is it?" "I don't know, Nick. They weren't announced." "You mean they just sort of . . . turned up at your desk?" I stifled a sneeze as I said this. I'd been fighting a beast of a cold all week. This was odd. Reception is two key-card-protected floors above us, and no one gets through unaccompanied, much less unannounced. "What do they look like?" I asked. "Lady Gaga strange?" Carter, Geller & Marks has some weird-looking clients, and Gaga flirts with the outer fringe, when she's really gussied up. "No--kind of stranger than that. In a way. I mean, they look like they're from . . . maybe a couple of cults." From what? "Which ones?" "One definitely looks Catholic," Barbara Ann said. "Like a . . . priestess? And the other one looks . . . kind of Talibanny. You know -- robes and stuff?" "And they won't say where they're from?" "They can't. They're deaf."
I was about to ask her to maybe try miming some information out of them, but thought better of it. The day was technically over. And like most of her peers, Barbara Ann has a French postal worker's sense of divine entitlement when it comes to her hours. This results from there being just one junior assistant for every four junior lawyers, which makes them monopoly providers of answered phones, FedEx runs, and other secretarial essentials to some truly desperate customers. So as usual, I caved. "Okay, send 'em in." The first one through the door had dark eyes and a bushy beard. He wore a white robe, a black turban, and a diver's watch the size of a small bagel. Apart from the watch, he looked like the Hollywood ideal of a fatwa-shrieking cleric -- until I noticed a shock of bright red hair protruding from under his turban. This made him look faintly Irish, so I silently christened him O'Sama. His partner was dressed like a nun -- although in a tight habit that betrayed the curves of a lap dancer. She had a gorgeous tan and bright blue eyes and was young enough to get carded anywhere. O'Sama gazed at me with a sort of childlike amazement, while the sister kept it cool. She tried to catch his eye -- but he kept right on staring. So she tapped him on the shoulder, pointing at her head. At this, they both stuck their fingers under their headdresses to adjust something. "Now we can hear," the nun announced, straightening out a big, medieval-looking crucifix that hung around her neck. This odd statement aside, I thought I knew what was happening. My birthday had passed a few days back without a call from any of my older brothers. It would be typical of them to forget -- but even more typical of them to pretend to forget, and then ambush me with a wildly inappropriate birthday greeting at my stodgy New York law office. So I figured I had about two seconds before O'Sama started beatboxing and the nun began to strip. 
Since you never know when some partner's going to barge through your door, I almost begged them to leave. But then I remembered that I was probably getting canned soon anyway. So why not gun for YouTube glory, and capture the fun on my cellphone? As I considered this, the nun fixed me with a solemn gaze. "Mr. Carter. We are visitors from a distant star." That settled it. "Then I better record this for NASA." I reached across the desk for my iPhone. "Not a chance." She extended a finger and the phone leapt from the desk and darted toward her. Then it stopped abruptly, emitted a bright green flash, and collapsed into a glittering pile of dust on the floor. "What the . . . ?" I basically talk for a living, but this was all I could manage. "We're camera shy." The nun retracted her finger as if sheathing a weapon. "And as I mentioned, we're also visitors from a distant star." I nodded mutely. That iPhone trick had made a believer out of me. "And we want you to represent us," O'Sama added. "The reputation of Carter, Geller & Marks extends to the farthest reaches of the universe." The absurdity of this flipped me right back to thinking "prank" -- albeit one featuring some awesome sleight of hand. "Then you know I'll sue your asses if I don't get my iPhone back within the next two parsecs," I growled, trying to suppress the wimpy, nasal edge that my cold had injected into my voice. I had no idea what a parsec was, but remembered the term from Star Wars. "Oh, up your nose with a rubber hose," the nun hissed. As I was puzzling over this odd phrase, she pointed at the dust pile on the floor. It glowed green again, then erupted into a tornado-like form, complete with thunderbolts and lightning. This rose a few feet off the ground before reconstituting itself into my phone, which then resettled gently onto my desk. That refuted the prank theory nicely -- putting me right back into the alien-believer camp.
"Thank you very kindly," I said, determined not to annoy Xena Warrior Fingers ever, ever again. "Don't mention it. Anyway, as my colleague was saying, the reputation of Carter, Geller & Marks extends to the farthest corner of the universe, and we'd like to retain your services." Now that I was buying the space alien bit, this hit me in a very different way. The farthest corner of the universe is a long way for fame to travel, even for assholes like us. I mean, global fame, sure -- to the extent that law firms specializing in copyright and patents actually get famous. We're the ones who almost got a country booted from the UN over its lax enforcement of DVD copyrights. We're even more renowned for our many jihads against the Internet. And we're downright notorious for virtually shutting down American automobile production over a patent claim that was simply preposterous. So yes, Earthly fame I was aware of. But I couldn't imagine why they'd be hearing about us way out on Zørkan 5, or wherever these two were from. "So, what area of the law do you need help in?" I asked in a relaxed, almost bored tone. Feigning calm believably is a survival tactic that I perfected as the youngest of four boys (or of seven, if you count our cousins, who lived three doors down. I sure did). It made me boring to pick on -- and useless as a prank victim, because I'd treat the damnedest events and circumstances as being mundane, and entirely expected. It had also helped me immensely as a lawyer (although by itself, it had not been enough to make me a successful one). Sister Venus gave me a cagey look. "It's sort of . . . an intellectual property thing."

Okay, I know it was just two days ago that we announced the Techdirt Book Club book for June (and, a reminder that tomorrow at 1pm PT/4pm ET, we'll be holding a Q&A with Patricia Aufderheide for the Techdirt Book Club book for May), but today we're "pre-announcing" the Techdirt Book Club book for July.
And that's because if you want to get a free hard copy of the book, you can enter a giveaway starting today. You may remember, a few months back, Rob Reid (founder of Listen.com, among other things) got plenty of attention for his rather humorous talk about copyright math. And, earlier this week, we wrote about his op-ed for the WSJ concerning ways to compete with "free." But none of that compares to Year Zero, Reid's new novel, which is being released on July 10th. It's all about aliens who go bankrupt after they realize they owe the record labels more money than exists in the universe, because they got hooked on our music, and shared that music with other aliens. Rob has released a video trailer as a teaser for the book, which is quite amusing: I've had a chance to read the book, and I can say that it's awesome. Think Hitchhikers Guide to the Galaxy, but with copyright law driving a major plot line. A mainstream humorous sci-fi novel that uses the Berne Convention as a key plot point and tosses aside casual references to Larry Lessig and Fark? Yes. Count me in. And, unlike most novels that bring up copyright, this one gets the legal issues mostly right (there is one point where trademark and copyright get confused, but it's so minor, you'll let it slip). Anyway, as we said, this will be the Book Club book for July, and we'll be doing some fun things with Rob to have him engage with everyone here—but, if you're lucky, there's a chance for you to get a physical copy of the book delivered a month before it's actually released. Rob has the details on his blog, but basically you have to let him know (via a comment on his blog, a tweet or a Facebook comment) what song you'd like to beam to the aliens. Thirty winners -- ten from the comments, ten from Twitter and ten from Facebook (though you can enter all three) -- will be chosen at random to get books. So, go ahead and beam some songs to aliens. 
And just hope the RIAA doesn't claim that you're "inducing" infringement by doing so...

Just about every science fiction story that involves aliens has to come up with some way for different languages to be translated and understood. Babel Fish, C3PO and Star Trek's "universal translator" all served this purpose. But, it would be revolutionary for technology just to translate between different human languages. Here are some quick links on the topic of communication research.

The search for intelligent life somewhere else in the universe hasn't turned up any positive results so far. But the universe is a big place -- and we haven't really been looking for that long. Here are some quick links on some projects that could help identify ETs.
World UFO Day is celebrated on July 2nd of each year. It was established by the World UFO Day Organization (WUFODO). According to worldufoday.com, there are several reasons this day was founded. One of the first and foremost is to raise awareness about what believers regard as the undoubted existence of UFOs and, with that, of intelligent beings from outer space. The day is also used to encourage governments to declassify their knowledge about sightings throughout history. Many governments, the US government for instance, are believed to have gained exclusive information about UFOs through their military departments. A subject that still raises a lot of curiosity is the Roswell incident of 1947, when an object believed to be a UFO crashed near Roswell, New Mexico. Find out more about World UFO Day by visiting their website: www.worldufoday.com

What is a UFO?

An unidentified flying object, often abbreviated UFO or U.F.O., is an unusual apparent anomaly in the sky that is not readily identifiable to the observer as any known object, and is often associated with extraterrestrial life. While technically a UFO refers to any unidentified flying object, in modern popular culture the term has generally become synonymous with alien spacecraft; however, the term ETV (ExtraTerrestrial Vehicle) is sometimes used to separate this explanation of UFOs from totally earthbound explanations. Proponents argue that because these objects appear to be technological and not natural phenomena, and are alleged to display flight characteristics or shapes seemingly unknown to conventional technology, they must not be from Earth. Though UFO sightings have occurred throughout recorded history, modern interest in them dates from World War II (see foo fighter), further fueled in the late 1940s by Kenneth Arnold’s report of a close encounter, which led to the coining of the term flying saucer, and by the Roswell UFO Incident.
Since then governments have investigated UFO reports, often from a military perspective, and UFO researchers have investigated, written about, and created organizations devoted to the subject. One such investigation, the UK’s Project Condign, made public in 2006, attributed unaccountable UFO sightings to a hitherto unknown and scientifically unexplained “plasma field.” It also concluded that Russian, former Soviet republic, and Chinese authorities had made a co-ordinated effort to understand the UFO phenomenon and that military organizations, particularly in Russia, had done “considerably more work (than is evident from open sources)” on military applications stemming from their UFO research. The report also noted that “several aircraft have been destroyed and at least four pilots have been killed ‘chasing UFOs’.”
Ecosystem root respiration in sugar maple forests increases with temperature within populations but not across populations: Results of a range-wide study

Common temperate tree species, such as sugar maple, have geographic ranges that encompass tremendous natural variation in climate. Understanding the local adaptations that occur in response to these climatic differences could improve our understanding of potential large-scale responses to climatic warming. Root respiration typically increases exponentially with temperature within a location and species, but there are few data regarding how this response may vary among populations and the mechanisms controlling population differences. Our objective was to determine how carbon allocated to sugar maple root respiration varies with local climate at sixteen sugar-maple-dominated sites in the central U.S. The sites span 10 degrees of latitude, across which mean annual temperature increases from 4 to 14 °C. Fine-root (<1 mm) biomass and respiration at ambient soil temperature were measured during June, July, and September 2014. Ecosystem fine-root respiration was estimated as the product of specific fine-root respiration and biomass. Models often allow plant tissue respiration to increase exponentially with temperature, with a Q10 of 2. This would imply 85% greater specific respiration rates at the southern sites than at the northern sites in July, when soil temperatures increased from 11.3 to 21.0 °C from north to south. Within each of the locations, root respiration rates were greater for sampling dates with warmer soil temperature. Across sites, however, root respiration did not increase from cooler to warmer locations for any of the three sampling periods, and rates were actually slightly lower at the southernmost sites (P = 0.01) in July, despite soil temperatures nearly 10 °C warmer than at the northern sites.
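The exponential Q10 temperature response invoked above can be sketched as follows; the function and variable names are illustrative, not from the study itself:

```python
def q10_scaled_rate(r_ref, t_ref, t, q10=2.0):
    """Respiration rate at temperature t, scaled exponentially from a
    reference rate r_ref measured at temperature t_ref, using a Q10
    temperature coefficient (Q10 = 2 means rates double per 10 °C)."""
    return r_ref * q10 ** ((t - t_ref) / 10.0)

# With Q10 = 2, a 10 °C warming exactly doubles the predicted rate:
print(q10_scaled_rate(1.0, 10.0, 20.0))  # 2.0

# July soil temperatures in the study ran from 11.3 °C (north) to 21.0 °C (south);
# the predicted southern/northern rate ratio is 2 ** (9.7 / 10):
ratio = q10_scaled_rate(1.0, 11.3, 21.0)
print(round(ratio, 2))
```

Note that with these exact endpoint temperatures a Q10 of 2 predicts a ratio of about 1.96; the abstract's 85% figure presumably reflects a slightly different temperature comparison than the two endpoints quoted here.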
This was associated with decreases in fine-root N concentration (P = 0.003) and metabolic capacity (P < 0.001) from north to south. Fine-root biomass also decreased (P < 0.001) from north to south, which contributed to lower ecosystem fine-root respiration at the warmer, southern sites (P < 0.001). It is not known if these differences among sites are a plastic response to environmental conditions that all sugar maple are capable of, or if they are the result of genetically distinct ecotypes, tightly adapted to local climate, occurring at each site. If the former is true, climatic warming will have little impact on belowground C allocation across sugar maple’s range. If the latter is true, ecosystem root respiration would increase greatly at all locations, potentially altering ecosystem carbon balance and productivity.
*The West Virginia Aerospace and Engineering Scholars Experience will not be held in 2014. The College of Science and Technology looks forward to pursuing this endeavor in the future.*

THE WEST VIRGINIA AEROSPACE AND ENGINEERING SCHOLARS EXPERIENCE

The West Virginia Space Grant Consortium and Mid-Atlantic Aerospace Complex are partnering with NASA Johnson Space Center to provide the exciting opportunity of a NASA-based science, technology, engineering, and mathematics (STEM) educational program for high school juniors in West Virginia. The West Virginia Aerospace and Engineering Scholars, a partnership of NASA Aerospace Scholars, is a competitive program that allows high school juniors to apply to take an engaging online NASA-developed course using a Space Exploration theme to teach a broad range of science, technology, engineering, and mathematics skills aligned with West Virginia's Content Standards and Objectives. Based on their course performance, scholars may be selected to participate in an all-expenses-paid, three-day residential academy at Fairmont State University during the first week of August.

Coursework is offered online from May through July. This graded course consists of eleven lessons and a final project that allow students to build their knowledge of NASA, space exploration, and key STEM skills. Master Educators will work with students online throughout the course. Eleven lesson modules have been compiled to expand students’ knowledge, to prepare the Scholars for their three-day Summer Academy at Fairmont State University, and to familiarize them with aerospace exploration. Learning will take place through weekly reading assignments, simulations, viewing video segments, and participating in online discussions. Learning will be demonstrated through completion of case study analysis, technology design activities, quizzes, participation in online discussions, essays, and a final scientific report.
These activities are submitted in sequential order, every week, and posted through the WVAES online course website. All activities within the modules are reviewed and evaluated by a certified West Virginia educator through the online system, and detailed feedback is provided. Once students have been accepted to the WVAES program, they will receive instructions on how to complete and submit each module assignment. All WVAES modules are aligned to the following standards for students in grades 9-12.

Online Discussions/Chat Room

Scholars participate with mentors, WVAES staff, their instructors, and each other in online discussions during the class. These online discussions focus on current aerospace science and technology that support NASA’s vision for space exploration. As part of the course assessment, Scholars are expected to participate in these discussions. Scholars who are chosen to participate in the Summer Academy submit a final project before arriving at Fairmont State University in the summer. These projects represent the culmination of knowledge gained from the online modules and prepare the students for their work at the Summer Academy, where they will work collaboratively. The Scholars will choose their project topic based upon their interests and team assignment. Teams are assigned mid-way through the online course.

Science Elective Credit

If selected students participate in the distance learning coursework and complete the Summer Academy program, school districts will be encouraged to grant a science elective credit towards graduation. The final decision regarding any credit rests with the student's school system.

Who Can Apply?

Students from across the state of West Virginia are selected to participate through a competitive application process. In order to participate students must:

What is the timeline?

As a WVAES Scholar, once the distance learning activities are finished and the Summer Academy ended, the responsibility continues.
Scholars are encouraged to:
By Catherine Addington

When a Christian is caught between a political economy hostile to human flourishing and a Church all too often comfortable with the status quo, it is demoralizing to have recourse to an ugly, embattled public square. Who wants to have life-or-death debates in a cold professional setting? In what universe is pitting hostile voices against one another conducive to Christian fellowship?

But by the time Bartolomé de las Casas and Juan Ginés de Sepúlveda met at Valladolid, Spain in 1550 to debate the morality of the conquest of America, the question had already been settled along with the continent. The debate was convened by Carlos V, king of Spain and Holy Roman Emperor, who had not yet been born when Columbus arrived on Hispaniola nearly sixty years earlier. The existence of America, and Spanish dominion over it, were facts of life for him. The Spanish were not seriously considering withdrawal from the Americas. There was no going back.

The debate was not about conquest, then, but colonization; it was not about the nature of indigenous people, but their treatment. Carlos V was not asking if he could conquer indigenous people, but if he could give them to his soldiers as slaves, along with their land, as a reward for their service to the crown. Sepúlveda argued that the conquest was a just war, so Carlos could keep the profits (land and people) and distribute them as he pleased. Las Casas argued that the conquest was unjust, so Carlos had to make restitution for it.

Neither man won the debate, and the issue was never resolved. The debate has mainly become famous in retrospect, metonymically standing in for the entire colonial project. At the time, though, it was politics. As such, the men’s writings have a curious dual nature as both catty interpersonal sniping from opposite sides of the political spectrum and incredibly high-stakes ethical discussions. Regrettably, it is a rather familiar tone.
Bartolomé de las Casas became a planter and owner of indigenous slaves at the age of 18, when he immigrated with his father to the island of Hispaniola in 1502. After becoming a priest, he experienced a profound conversion while meditating upon the book of Sirach: “If one sacrifices ill-gotten goods, the offering is blemished; the gifts of the lawless are not acceptable.” Abandoning his ill-gotten wealth, Las Casas returned to Spain as an anti-slavery activist. In the following years, he was granted a position as court adviser, given the title of Protector of the Indians, and testified before the legislature on the conquistadores’ abuses. (This testimony resulted in the abolition of indigenous enslavement, which was ignored by rioting colonists and repealed.) When Las Casas became Bishop of Chiapas, México, he attempted to enforce abolition by refusing the sacraments to slave owners. This proved so unpopular that he was forced to return permanently to Spain, where he continued his activism.

Juan Ginés de Sepúlveda was Carlos V’s royal chronicler and chaplain. His writings in this capacity were nominally historical, but functionally defensive, providing an official version of the Spanish empire’s expansion in the Americas and a justification for its policies there. Before he took on that office, his career was a long string of academic treatises (anti: Desiderius Erasmus, Henry VIII; pro: Aristotle, Machiavelli). His first major work was a panegyric in honor of the emperor. Theologians saw him as compromised—to say the least—but he had the vigorous support of the emperor’s advisers, who had invested a great deal in the colonies. Las Casas’ activism was the political question of the day, and everyone had an opinion. Sepúlveda just happened to be the one who got the guy’s attention.

In 1550, Sepúlveda released Democrates alter, a fictitious dialogue arguing that the Spanish conquest of America was a just war.
It invoked Aristotle’s concept of “natural slavery” at length:

…the Spanish have a perfect right to rule these barbarians of the New World and the adjacent islands, who in prudence, skill, virtues, and humanity are as inferior to the Spanish as children to adults, or women to men, for there exists between the two as great a difference as … between apes and men.[1]

Before Las Casas even read the book, he had already written a response to it—or at least to the Spanish summary of it that came across his desk. “What blood will they not shed?” Las Casas began his Apologia, describing the soldiers allegedly emboldened by Sepúlveda’s words.

What cruelty will they not commit, these brutal men who are hardened to seeing fields bathed in human blood, who make no distinction of sex or age, who do not spare infants at their mothers’ breasts, pregnant women, the great, the lowly, or even men of feeble and gray old age for whom the weight of years usually awakens reverence or mercy?[2]

Las Casas blatantly broke the rules of procedure here. One can almost hear the Twitter discourse: maybe read the book first before you get all wrapped up about it? Better yet: how about engaging the ideas instead of going immediately for the ad hominem? But then again, he was operating on the correct assumption that the urgency of combatting an attempt to justify indigenous enslavement outweighs intellectual politeness. Rejecting the time-honored temptation to make an idol of decorum, he put things plainly.

This exchange makes evident the clash of personality (let alone ideas) between the two men. Sepúlveda wrote a Socratic dialogue of Aristotelian ideas, branding himself the rational debater. He philosophizes. Las Casas wrote with strong language and evocative imagery, coming off as an impassioned firebrand. He preaches.
Even though they both cited the Greek philosophers and the books of the Bible throughout their works, and even cited each other, they were fundamentally not having the same discussion. It’s a familiar disconnect today.

After observing this exchange, Carlos V invited Las Casas and Sepúlveda to debate one another on the matter at Valladolid. They met twice, in 1550 and 1551. They read from Democrates alter and the Apologia while various scholars and courtiers sat as judges. Their approaches and priorities continued to be starkly different. Most crucially, Las Casas always framed the debate in terms of real-life consequences, not rhetorical soundness:

When Sepúlveda, by word or in his published works, teaches that campaigns against the Indians are lawful, what does he do except encourage oppressors and provide an opportunity for as many crimes and lamentable evils as these men commit, more than anyone would find it possible to believe?[3]

Sepúlveda saw this approach not as driven by urgency, but by pettiness. For him, this discussion was a matter of reputation, in the defense of which his good intentions were the only relevant evidence. He wrote of his frustrations with Las Casas’ personal attacks to a friend:

May those who would directly attack my virtue and religious sentiments take care, and be aware that perhaps with their same intentions, I direct all my energies toward the attainment of virtue and the defense of religion, without fraud or lie, making honest use of the freedom that God gave me.[4]

Even when they did leave the personal dimension out of it, the two men differed on their interpretations of the facts. Sepúlveda argued that Spain’s conquest was justified because it liberated would-be victims of human sacrifice and cannibalism.
Las Casas doubted Spain’s moral credibility, as a practitioner of equal evils—like unprovoked war, robbery, and violence:

The Indians are under no obligation to believe the Spaniards, even if they force the truth on them a thousand times. Why will they believe such a proud, greedy, cruel, and rapacious nation? Why will they give up the religion of their ancestors, unanimously approved for so many centuries and supported by the authority of their teachers, on the basis of a warning from a people whose words work no miracles to confirm the faith or lessen vice?[5]

Neither human sacrifice nor cannibalism was an active problem in the 1550s (and the latter likely never had been), and Las Casas often pointed out that indigenous people were not the monsters that European chroniclers made them out to be. Nevertheless, instead of debunking indigenous cruelty, Las Casas reminded the Spaniards of their own. It is a move that makes no sense if you are a scholar trying to win a political argument with Spanish royal judges. It is, however, an effective moral argument if you are a preacher trying to appeal to conscience.

And so both men went home victorious, or defeated, or just riled up. The king never made a final decision.

The following year, Las Casas published a series of eight treatises in rapid succession. One was his earlier testimony on the conquistadores’ behavior, now known as the Short Account of the Destruction of the Indies. Another was his account of the debate at Valladolid. Infuriated, Sepúlveda quickly published a response: The reckless, scandalous, and heretical propositions that Doctor Sepúlveda noted in the book about the conquest of the Indies that Fray Bartolomé de las Casas, former bishop of Chiapas, printed without license in Sevilla in the year 1552.
Sepúlveda’s outrage at his opponent having published their debate “without license,” as if a public figure engaging in a public debate on public policy were entitled to keep his views private, reveals his anger to be more procedural than substantial. Even so, the far more serious charge of heresy in the title suggests that Sepúlveda was behind the denunciation of Las Casas to the Inquisition. (It is not known what became of these charges; Las Casas was evidently never prosecuted.)

“I thought I had done enough regarding the Bishop of Chiapas for him to leave me in peace,” Sepúlveda’s response began. Nevertheless, he reiterated their disagreement:

The Bishop of Chiapas, having read my work a thousand times, does not refute it but rather spends all his time recounting the cruelties and robberies that soldiers have committed (and even those they have not committed), saying falsely that I favor and approve of such evils, even while knowing—like everyone else who read my book, circulated among all Christendom—that I affirm the opposite. Such evils seem even worse to me than they do to him, and I denounce them as bitterly as one should in my book, though I do not spend as much time on it as he does, since that was never the issue at hand.[6]

Both men moved on from there. Las Casas returned to his activism, trying to help the indigenous leaders in Perú buy their land back from the Crown. He died fourteen years later, having battled accusations of treason and heresy for decades. The troubled conscience of the former slave owner was never quite put to rest. Sepúlveda returned to his solid academic career, and died a few years after his rival, mildly well-respected to the end. He had accomplished what he planned for his own epitaph.
“Here lies Juan Ginés de Sepúlveda,” he had written, “who sought to comport himself in such a fashion that his ways would merit the approval of upright and pious men, and that his doctrine and books written about Theology, Philosophy, and History would themselves merit the approval of learned and unbiased men.”[7]

These are the models we have for Christian discourse when the empire gets conflicted and wants our permission to proceed with atrocities.

We can be motivated by conscience, like Las Casas was, the words of Sirach reverberating in his mind. We can recognize, as Las Casas did, the ways in which our wealth is ill-gotten, and avoid both scrupulosity and complacency in our response to that recognition. We can be troubled by our own relationship to the status quo and use our position within it to advocate for those without. Or we can be motivated by the approval of others, like Sepúlveda was, more interested in intellectual accolades and social capital than the mandates of the Gospel.

We can focus on actual human lives and the lived consequences of our beliefs, like Las Casas did. We can have debates where emotion is welcome, rather than grounds for dismissal, because emotion is a perfectly rational response to injustice. We can prioritize the poor, not as a hypothetical class to collectively receive vassals’ benefits, but as an actual group of human beings to whom restitution is owed. We can have debates that talk about people, or we can have Sepúlveda’s debates, where we talk about ideas and judge them purely by rhetorical criteria.

We can prioritize faithfulness to religious conviction, like Las Casas did, remembering that our society is well served by making it conform to Christ. We can be concerned for souls on both sides of injustice. We can be Christians, or we can prioritize loyalty to national identity. Sepúlveda, ever the good Spanish subject, thought treason was the worst accusation a Christian could face.
We can share his fear, if we are afraid of a Savior who was called a traitor in his day.

It always sounds very high-stakes and dramatic to talk about our own moment in history this way. It is easier to feel isolated within tradition, facing unique challenges, too embattled and exhausted to do much of anything. That is why I study this moment in history; it reminds me of our own, and I am looking for inspiration.

Here’s where I find it. I look back five centuries at these men scoffing at one another in government reports and panel debates, taking swipes at each other’s religious sincerity and intellectual capabilities, and I swear it gives me hope. Valladolid tells me: we have always been like this. It is always hard to make your faith inform your politics in any real way, because it is always hard to do the right thing. This is what it is to be fallen. And yet this is not a particularly difficult time to be a decent human being, no more so than the sixteenth century. Our consciences are at work. God is at work.

1. Source: Columbia University.
2. Source: Casas, Bartolomé de las. In Defense of the Indians. Translated by Stafford Poole. Northern Illinois University Press, 1974. p. 19.
3. Source: Casas 1974, p. 19.
4. Source: “Carta a Santiago Neyla,” p. 211. In Sepúlveda, Juan Ginés de. Epistolario: Selección. Translated by Ángel Losada. Ediciones Cultura Hispánica, 1966. My translation to English.
5. Source: Casas 1974.
6. Source: My translation from the digitized original manuscript at Biblioteca Nacional de España.
7. Source: “Carta a Pedro de Sepúlveda,” p. 206. In Sepúlveda, Juan Ginés de. Epistolario: Selección. Translated by Ángel Losada. Ediciones Cultura Hispánica, 1966. My translation to English.
Overview for Bengal Tiger in the Bangladesh Sundarbans

The book focuses on:
i. The status of the habitat of the Bengal tiger in the Bangladesh Sundarbans;
ii. Biological details of the tigers in the Sundarbans mangrove forest;
iii. The status of the principal prey species of the Bengal tigers in the Sundarbans;
iv. The human-tiger interaction in the Sundarbans, with a view to minimizing conflicts; and
v. The ways to improve management and conservation of the Bengal tigers, as well as their habitat, the Sundarbans.

The terrain of the Sundarbans is totally different from that found in Ranthambhore and Nagarahole National Parks in India or Royal Chitwan National Park in Nepal, where Bengal tigers have been studied extensively. The mangroves present one of the most difficult terrains in the world for conducting scientific research. A number of books and research papers on the various aspects of the ecology of Bengal tigers are available, but there are only a few published documents on the various aspects of the Sundarbans tigers. This makes the Sundarbans tiger, as well as its habitat, more mysterious and difficult to understand. Hence, to fulfill the demand of the academic and scientific community, as well as the general public, this book is being published to present scientific information on the mysterious life of the Bengal tiger in the Bangladesh Sundarbans mangrove forest.
The Principal Pipeline Podcast: Practitioners Share Lessons from the Field (Intro)
Episode 1: Building the Pipeline
Episode 2: Improving Job Standards and Hiring Pays Off
Episode 3: Districts and Universities Work Together to Improve Preparation
Episode 4: Mentors Support Novice Principals on the Job
Episode 5: States Can Play a Role in Building Principal Pipelines
Episode 6: Shoring Up Two Critical Roles, Assistant Principals and Principal Supervisors
Episode 7: A District Strategy to Improve Student Achievement
Episode 8: Building Principal Pipelines Improves Principal Retention
Episode 9: Measuring the Effectiveness of Principal Pipelines
Episode 10: How Districts Sustained Their Principal Pipelines

What exactly is it that effective principals do that ripples through classrooms and boosts learning, especially in failing schools? Since 2000, The Wallace Foundation, which has supported projects to promote education leadership in 24 states and published 70 reports on the subject (including the Minnesota/Toronto research), has been trying to answer that question. A recently published Wallace Perspective report that takes a look back at the foundation’s research and field experiences finds that five practices in particular seem central to effective school leadership (The Wallace Foundation, 2012):
- Shaping a vision of academic success for all students, one based on high standards
- Creating a climate hospitable to education in order that safety, a cooperative spirit, and other foundations of fruitful interaction prevail
- Cultivating leadership in others so that teachers and other adults assume their part in realizing the school vision
- Improving instruction to enable teachers to teach at their best and students to learn at their utmost
- Managing people, data and processes to foster school improvement

When principals put each of these elements in place—and in harmony—they stand a fighting chance of making a real difference for students.
Design Challenge Statement
How might we employ our school leadership team (SLT) to mobilize our school’s vision of academic success for all students?

As part of this design challenge, you’ll work with your team to design a process and/or tool to evaluate SLT actions and decisions on the basis of their contribution to a vision of academic success for all students.

Tool: Collaborative Conversation Guide
Leaders will learn how to use the Collaborative Conversation Guide prototype to improve their decision-making process. This guide provides an evidence-based framework, relative to student outcomes, for structuring conversation around key processes, instructional grouping, instructional delivery, and collaborative culture.

Tool: Leadership Team Dashboard
Leaders will learn about the importance of leveraging the capacity of their leadership team. Through the use of a Leadership Team Dashboard, authentic structure and resources will assist leaders in shaping and monitoring a consistent schoolwide vision for student success.

Building Ranks Dimensions
Student-Centeredness; Equity; Communication; Collaborative Leadership; Curriculum, Instruction and Assessment

Design Challenge Statement
How might we provide teachers more leadership opportunities?

As part of this design challenge, you’ll work with your team to design products and tools that cultivate opportunities for teacher leadership that enhance a school’s climate and optimize its potential for supporting the educational mission of the school.

Tool: PowerPoint Presentation on Fostering Teacher Leadership
Leaders will learn how to enhance their school’s climate by fostering and developing teacher leadership in the building. This prototype highlights proven, research-based practices in identification of potential leaders; methods for selecting teacher leaders; and ideas for how to train teacher leaders.
Building Ranks Dimensions
Collaborative Leadership; Human Capital Management

Design Challenge Statement
How might we increase teacher collaboration?

As part of this design challenge, you’ll work with your team to design tools for promoting teacher collaboration.

Tool: A Roadmap to Effective PLCs
Leaders will learn about an interactive prototype that supports the principal in increasing teacher collaboration during Professional Learning Community (PLC) time to improve student achievement. This prototype features a self-assessment; a diagnostic area of focus; and targeted resources including videos, templates, protocols, and rubrics within each area, which allow leaders to evaluate the progress of the PLC work in their buildings.

Building Ranks Dimensions
Student-Centeredness; Collaborative Leadership; Results-Orientation; Curriculum, Instruction and Assessment; Reflection and Growth

Design Challenge Statement
How might we develop common understanding of how literacy develops across grade levels?

As part of this design challenge, you’ll work with your team to design a tool for promoting deep understanding of literacy acquisition and development across grade levels.

Tool: ProLit: Literacy Integration Tool
Leaders will experience a dashboard of the effective components of literacy integration in all subject areas. Appreciative inquiry serves as the foundation of integrating literacy. Information and ideas on how to use this approach are included, along with videos, rubrics, and other resources for literacy in math. The shell is built for other areas of literacy integration and can be populated by the leader according to their needs.

Building Ranks Dimensions
Equity; Communication; Results-Orientation; Curriculum, Instruction and Assessment; Human Capital Management

Design Challenge Statement
How might we use student work as a data source to plan instruction?
As part of this design challenge, you’ll work with your team to design a tool to make efficient and effective analytical use of student work as a mechanism to drive instructional decisions. This tool will focus on instruction planning after the analysis of student work, rather than on the analysis itself.

Tool: I’ve Analyzed My Data, Now What?
Leaders will experience a planning tool that provides the “so what” after teachers analyze data. The tool will give teachers the resources and support necessary to plan differentiated instruction for their students, both individually and collaboratively, after data analysis has occurred.

Building Ranks Dimensions
Student-Centeredness; Collaborative Leadership; Results-Orientation; Curriculum, Instruction and Assessment; Human Capital Management

States can pull a number of policy levers to help school districts develop, support and maintain a large corps of effective school principals.

Looking Inside and Across 33 Leading SEL Programs: A Practical Resource for Schools and OST Providers; Revised and Expanded 2nd Edition (Preschool and Elementary Focus)

The number of assistant principals has grown markedly in recent years, and with reconsideration, the AP role could do more to help foster educational equity, school improvement and principal effectiveness.

Long-term study of summer learning programs finds meaningful benefits over multiple years.

Supports for Social and Emotional Learning in American Schools and Classrooms: Findings from the American Teacher Panel
Teachers are confident they can help build students’ social-emotional skills, but say they could use more support to do so, according to a RAND survey.

To improve leadership of their schools, seven states have pulled a number of policy levers, from updating principal job standards to changing administrator licensing.
Changing the Principal Supervisor Role to Better Support Principals: Evidence from the Principal Supervisor Initiative
The Principal Supervisor Initiative succeeded in changing the supervisor position so that it centered on developing and evaluating principals to help them promote effective teaching and learning in their schools.

Leading the Change: A Comparison of the Principal Supervisor Role in Principal Supervisor Initiative Districts and Other Urban Districts
Six school districts working to focus the principal supervisor job on boosting principals’ instructional leadership were more likely than other districts to set up structures, such as dedicated training, to support the new role.

Large school districts nationwide are redesigning the principal supervisor job to focus more on principal support.

Taking Stock of Principal Pipelines: What Public School Districts Report Doing and What They Want to Do to Improve School Leadership
This report reveals findings from a first-of-its-kind national survey on effective school leadership as it relates to improving education and using a comprehensive principal pipeline.

This report, co-authored by the RAND Corporation, focuses on summer learning policy in the time of the pandemic.

This study finds that district leaders are pleased with the results of their ongoing principal pipelines.

A New Role Emerges for Principal Supervisors: Evidence from Six Districts in the Principal Supervisor Initiative

This report examines the expenditures of six large school districts, all participants in a Wallace Foundation initiative, as they built and operated principal pipelines. Among the chief findings: the cost represented a very small slice of annual district spending—an average of about $5.6 million annually for the districts, or about 0.4 percent of their local annual expenditures.

How can school districts build a pipeline of effective school principals?
This updated Wallace Perspective summarizes lessons learned about pipelines over the course of the initiative. It describes the four components of the pipelines: job standards for principals, high-quality pre-service training, rigorous hiring procedures, and tightly aligned on-the-job performance evaluation and support.

A look at data systems to improve school leadership offers hard-won insights gathered from six school districts that are building these systems to assist in everything from principal hiring to principal training.

The RAND Corporation conducted a synthesis of the evidence base on school-leadership interventions to better inform the rollout of those interventions under ESSA. This report is intended to help federal, state, and district education policymakers understand and implement school-leadership improvement efforts that are consistent with ESSA.

This report is the last in a series of studies examining the implementation of Wallace’s Principal Pipeline Initiative. In 2011, six large school districts each set out to develop a large corps of highly qualified school principals. After five years, according to this report, they have much to show for their efforts, having succeeded in putting into place four key components of a pipeline to the principalship.

Principals can make a big difference in the quality of the education students receive. That statement is not just a platitude. Research over the past decade or so has established that school leadership is second only to teaching among school-related influences on student learning, accounting for about one quarter of total school effects.

The largest study of summer learning finds that students with high attendance in free, five- to six-week, voluntary summer learning programs experienced educationally meaningful benefits in math and reading.
Intended for state officials involved in the assessment and approval of university and other programs to train future school principals, this report describes five design principles for effective program evaluation. The report also describes how two states, Illinois and Delaware, have approached evaluation, and provides a tool from its model-development work: an assessment that states can use to determine their degree of readiness for building a stronger system to evaluate principal preparation programs.

What is the state of university-based principal preparation programs? How are these essential training grounds of future school leaders viewed—by themselves as well as by the school districts that hire their graduates? Do the programs need to improve? If so, by what means? This publication seeks to help answer those questions by bringing together findings from four reports commissioned by The Wallace Foundation to inform its development of a potential new initiative regarding university-based principal training.

This report is the fourth in a series of studies examining six districts’ experiences in The Wallace Foundation’s Principal Pipeline Initiative, a six-year effort designed to help these districts build larger pools of strong principals and then study the results. It explores the districts’ work to change their approach to principal performance evaluation so that it focuses on working with principals, especially novices, to grow into their jobs and concentrate on improving teaching and learning in their classrooms.

School leadership is second only to teaching among school influences on student success, according to research. What can a school district do to produce a large and steady supply of top-notch school principals—and support their effective supervision? This Wallace Update describes two related Wallace Foundation initiatives seeking answers to that question.
Developing Excellent School Principals to Advance Teaching and Learning: Considerations for State Policy
School principals are “invaluable multipliers of teaching and learning in the nation’s schools,” according to this report by political scientist Paul Manna, but to date it’s been unclear what state policymakers could do to boost their effectiveness. Drawing from sources including the experiences of states that have focused on developing stronger principal policy, this report aims to fill that gap by offering guidance in the form of three sets of considerations for those who want to take action.

This report describes the School Administration Manager (SAM) process, an approach that about 700 schools around the nation are using to direct more of principals’ time and effort toward improving teaching and learning in classrooms. Principals often find themselves mired in matters of day-to-day administration and have little time to cultivate better teaching. The SAM process is designed to free up principals’ time so they can focus on improving instruction in classrooms.

Making Time for Instructional Leadership: Volume 2. The Feasibility of a Randomized Control Trial of the SAM Process
This report finds the SAM process could be replicated in a large enough number of schools, with enough fidelity to a theoretical model, that a randomized controlled trial would be a meaningful test of its impact.

This report includes 10 appendices referred to in the first two volumes of the series.

A look at what six districts are doing to create new principal pipelines that are grounded in strong leadership standards, pre-service training, selective hiring procedures, and on-the-job evaluation and support.

This first report of an ongoing evaluation of The Wallace Foundation’s Principal Pipeline Initiative describes the six participating school districts’ plans and activities during the first year of their grants.
The evaluation, conducted by Policy Studies Associates and the RAND Corporation, is intended to inform policymakers and practitioners about the process of carrying out new policies and practices for school leadership. This report reveals that all six participating districts in the Wallace Foundation's Principal Pipeline initiative have partnered with external programs for leader preparation. Meanwhile, novice leaders are supported by coaches—and those coaches, mentors, and supervisors all get assistance to build their capacity. This is the second report in an ongoing series that evaluates the activities of participating districts in the foundation's initiative. This report focuses on implementation of all components of the initiative as of 2014 and its four interrelated areas of district policy and practice: leader standards, preservice preparation, selective hiring and placement, and evaluation and support. This is the third report tracing the activities of six districts that agreed to adopt and implement the Wallace Foundation's initiative-specific approaches to improve school leader recruitment and retention. This report adds to existing resources for superintendents to provide support for new principals. The three tools designed by education researchers at the University of Washington are meant to help. Two focus on the redesign of central offices in ways that foster effective leadership in schools. The last is an aid for principal supervisors seeking to develop the instructional capabilities of the principals they oversee. Corcoran et al. look at the six districts participating in The Wallace Foundation's Principal Pipeline Initiative.
Part I presents a description of the organizational structure and general features of the various principal supervisory systems, including the roles, selection, staffing, professional development, and evaluation of principal supervisors, as well as the preparation, selection, support, and evaluation of principals. Part II provides recommendations for building more effective principal supervisors. Based on the survey results and observations from the site visits, these recommendations identify those structures and practices that are most likely to result in stronger school leaders and higher student achievement. The Wallace Foundation distills insights from school leadership projects and major studies supported by the foundation since 2000 to highlight key district actions to boost school leadership, including drawing up meaningful job descriptions and mentoring novice principals. This Wallace Perspective summarizes a decade of foundation research and work in school leadership to identify what it is that effective school principals do. It concludes that they carry out five key actions particularly well, including shaping a vision of academic success for all students and cultivating leadership in others. This report draws on a decade of work by the Wallace Foundation and identifies ways that preservice and in-service training can be enhanced to further develop strong leadership in every school. This report focuses on candidate selection, emphasizing instructional leadership—and including high-quality mentoring and individualized professional development. This Wallace Foundation report shows that leadership is second only to teaching among school influences on student success, and its impact is greatest in schools with the greatest need, according to this landmark examination of the evidence on school leadership. Principal Leadership, Vol. 18 n1, pp. 47-49. Browne, D. (May 2017). The role of nonacademic skills in academic outcomes. Principal Leadership, Vol. 19 n9, pp. 40-43. Cummins, H.J. (January/February 2015). Best Practices in Action. Principal, v94 n3, pp. 26-29. Making Space for New Leaders. Principal Leadership, v15 n5, pp. 24-27. JSD, the Learning Forward Journal, v35 n5, pp. 46-49. School districts are experimenting with several strategies to build up the role of principals' managers in the central office as a means to improve principal effectiveness and provide instructional support. Gil, J., Wallace Foundation. Despite tight budgets, Denver Public Schools has hired more people to coach and evaluate leaders. Here's how the district did it. Syed, S. A Wallace Foundation study finds five practices that will help principals lead their schools through implementing new standards. Mendels, P., Mitgang, L.D. The Wallace Foundation reports on dozens of districtwide efforts that are aiming to make school leaders more effective. This article argues for principal training programs that are selective, comprehensive and support principals beyond their graduation dates. Mendels, P. Six school districts are participating in an initiative funded by The Wallace Foundation to ensure that a large corps of school leaders is properly trained, hired, and developed on the job. Mendels, P. After reviewing its body of research and field experiences, The Wallace Foundation pinpoints five practices central to effective school leadership. This is a four-part video series that explores Illinois' actions to revamp the way school principals are prepared. The series begins with the tale of how the state of Illinois and its partners, including universities, districts and teachers' unions, accomplished this change. Two of the videos profile exemplary preparation programs at the University of Illinois at Chicago and New Leaders Chicago, which helped to inspire the higher standards and whose graduates effectively lead Chicago public schools.
The final video features Chicago principals who describe how their training programs prepared them for the real demands of their jobs. How can state policy improve the effectiveness of school principals? Educators, including New York State's commissioner of education, gathered in Washington, D.C., on November 3, 2015, to discuss a major Wallace Foundation report that seeks to answer that question. Keynote speaker Paul Manna, professor of government and public policy at the College of William & Mary and the author of the report, Developing Excellent School Principals to Advance Teaching and Learning: Considerations for State Policy, summarized key findings from his research. He described three matters policymakers must understand before taking action: principals' place on their state's policy agenda; six possible levers that could trigger change; and their state's unique context, including the ways in which key education-related institutions interact. This video series follows 10 principals in four metropolitan areas through their workdays, showing how they use five practices of effective school leadership to improve teaching and learning in their classrooms. The practices, described in The School Principal as Leader, are based on more than a decade of Wallace-supported research to identify what successful principals do. Six large school districts have been participating since 2011 in The Wallace Foundation's Principal Pipeline Initiative, a six-year effort to train, hire and support talented principals. In this series of eight videos, the superintendents of these districts discuss details of their effort, lessons they have learned and advice they can offer to other districts. Many of the experiences they recount are detailed in a January 2015 report about the initiative, one in a series by independent researchers evaluating the effort.
This web-based professional learning guide uses excerpts from the award-winning PBS documentary film, The Principal Story, to illustrate the five practices. The guide is intended to help those who prepare and support aspiring and current principals probe these essential practices. Use this facilitator guide to explore options for using these tools. These videos ask, "What makes for an effective principal?" And they answer: five practices, done well. Listen as 13 school leaders talk about how they have put those practices to work. Identified by local administrators for their efforts to boost teaching and learning, often under difficult circumstances, the principals come from districts receiving Wallace Foundation grants to improve school leadership. The Principal Story is a critically acclaimed PBS documentary that follows two school leaders determined to make successes of the difficult schools they lead, with specially prepared materials to help users promote excellence among principals. Videos and conversation guides can be used by principals, state or district officials, policymakers and concerned parents. For additional resources from The Wallace Foundation, visit their website.
<urn:uuid:03f7351a-39f2-4494-8da6-828df2a6d43a>
CC-MAIN-2023-40
https://www.nassp.org/the-wallace-foundation-resources/
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510208.72/warc/CC-MAIN-20230926111439-20230926141439-00165.warc.gz
en
0.939647
4,389
2.671875
3
Concrete structures may need regular maintenance procedures, such as resurfacing or patching, especially when they have superficial cracks or erosion problems. However, when the surface of the concrete is too damaged, you will probably need to replace the entire piece of concrete. You should hire a professional expert, like Apex, for any concrete replacement on your property. Conditions that call for a complete concrete replacement include sunken concrete slabs, frost damage on the concrete surface, deep cracks, construction additions, a severely damaged concrete surface, and faulty concrete. Some of these issues require you to replace the whole concrete slab immediately, and when the concrete is completely damaged, it needs to be demolished.

Some Methods of Concrete Demolition

Bursting
This bursting method can be done chemically or mechanically. Holes are drilled into the concrete, and lateral forces are then applied to it. In mechanical bursting, the concrete is split with a powerful hydraulic pressure machine; in chemical bursting, it is split with an expansive slurry. Once the concrete is split, it can be removed manually or with a crane.

Pneumatic and Hydraulic Breaker
This method is commonly used in demolition projects to break up pavement, bridge decks, and other structures. The complexity of this method depends strongly on the concrete strength, the steel reinforcement, the hammer, and the working conditions. Machine-mounted breakers perform demolitions in the range of 100 to 20,000 foot-pounds at 300 to 800 blows per minute. Some breakers are also designed for underwater demolition.

Ball and Crane Method
This is one of the oldest concrete demolition methods. In this method, a crane uses a wrecking ball weighing 13,500 pounds or more to demolish the concrete structure. The ball is dropped or swung onto the concrete structure.
Only professional, highly skilled operators can perform ball and crane concrete demolition. Concrete can be demolished easily with this method; however, it produces a lot of noise, vibration, and dust. Many factors determine the success of the demolition, such as the crane size, the working area, and the proximity to the nearest power lines.

Explosives
Explosives can also be used to destroy large volumes of concrete. They are usually inserted into predefined boreholes in the concrete structure and cause the whole structure to break into several small pieces. This method is flexible and versatile, but it can cause damage around the building. Before you can use this method, you need permits from the local government or authorities, and there are safety requirements you must follow.

Dismantling
This is another common method used to demolish concrete structures. The concrete structure is cut with a saw, thermal lance, or high-speed water jets, and the pieces are removed by crane. As a result, the demolition process produces less noise and dust and has little impact on the surroundings. This method is especially beneficial when building structures are being removed completely. Some building portions, such as walls or slabs, need to be removed before the dismantling process can begin.

Those are some of the concrete demolition methods popular among contractors today. Concrete demolition can be necessary in some situations, for example, major renovation projects or heavily damaged concrete. Each method has its own benefits and drawbacks, so compare the available methods before choosing the best demolition service for your needs.
For any concrete services in North Carolina, contact Apex Concrete & Hauling today!
<urn:uuid:c81bddc8-41bf-4ee2-9970-70b744e39c41>
CC-MAIN-2022-49
https://apexconcreteservice.com/the-process-of-concrete-demolition/
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00369.warc.gz
en
0.931052
790
2.578125
3
A mobile solution captures 99% of entrained oil utilizing an innovative cross-flow method. Oil entrained in produced water can amount to significant dollars in lost revenue. Extracting oil typically requires and produces significant quantities of water. Even when there are systems in place to separate the oil from the water at the surface, such as "heater treaters" or similar methods, oil is still entrained and lost in the water. When the water gets hauled away, the entrained oil goes with it, along with the revenue the oil could have generated. If you visit a disposal well with an open lagoon, you can see exactly how much oil has separated from the water. Consider a disposal well that injects 16,000 bbls per day. If the water contains 1,000 ppm of oil, roughly 20 bbls of oil are pumped away every day. While enough oil separates out to make disposal wells more profitable, there is still enough entrained in the water to cause problems when the mixture is pumped downhole. Revenue is lost, and the BHTP (Bottom Hole Treating Pressure) goes up over time as the injection reservoir becomes coated with oil. Separating more oil from the water makes sense, but until now it hasn't been an easy process. There are several separation methods, but dirty produced and flowback water complicate most processes. One proven method is membrane separation. The downside, however, is that it is typically too slow and the membranes are sometimes not robust enough for the oil field. The new micro-porous membrane used in this case study "loves" water and "hates" oil so much that it can be used to dewater a 90° API condensate stream. Water flows freely through the membrane while the oil is held back. The membrane permeates water at much higher rates than traditional oil/water membranes.
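The disposal-well arithmetic above can be sketched in a few lines. This assumes the ppm figure is a volume fraction (on that basis, 16,000 bbl/day at 1,000 ppm works out to 16 bbl/day; the exact number depends on the measurement basis), and the oil price used here is a placeholder, not a figure from the case study.

```python
def entrained_oil_bbl(water_bbl_per_day: float, oil_ppm: float) -> float:
    """Barrels of oil hauled away with disposal water each day,
    treating ppm as a volume fraction."""
    return water_bbl_per_day * oil_ppm / 1_000_000


def lost_revenue_per_day(oil_bbl: float, price_per_bbl: float) -> float:
    """Revenue lost with the entrained oil (price is illustrative)."""
    return oil_bbl * price_per_bbl


oil = entrained_oil_bbl(16_000, 1_000)
print(oil)                              # 16.0 bbl/day on a ppm-by-volume basis
print(lost_revenue_per_day(oil, 60.0))  # dollars/day at an assumed $60/bbl
```

Even small per-day losses compound: at these rates, the entrained oil adds up to thousands of barrels per year for a single disposal well.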
SOLUTION AND PERFORMANCE
To fully utilize the membrane, a concept new to the oilfield, known as cross-flow (X-Flow) filtration, is required. Instead of dead-end filtration, where the fluid flows directly into the filter, X-Flow continually sweeps the oil-contaminated water across the membrane. Oil-free water permeates the membrane, and the membrane is kept clean by the dirty fluid continually flowing tangentially across it. X-Flow extends the life of the membrane to months or years, which eliminates labor-intensive filter change-outs. The pump feeding the dirty, oil-contaminated water does the work that keeps the filter's surface clean. In dead-end filtration, particles can become embedded in the media. By contrast, this system can be cleaned and backwashed, which greatly extends the life of the membrane. In trials at a disposal well in south Texas, the new filter not only separated oil and water, it also separated bacteria, iron, and more. The resulting salt water was clean and oil free. The X-Flow filtration equipment can be scaled by adding modules and pumps to get more oil-free, clean water permeate. Currently there are laboratory systems to run 5-gallon samples, a full-scale field laboratory trailer set up to permeate up to 10 gpm, and one full-scale, field-ready unit to permeate up to 250 gpm. X-Flow modules banked together on a gooseneck trailer can deliver up to 10 bpm of oil-free clean water for disposal or frac fluid reuse. In sum, this is an economical and innovative technology approach to capture 99 percent of free oil, and it is available now.
<urn:uuid:be0a4439-b834-4e0f-a060-30f1d90cf609>
CC-MAIN-2020-24
https://highlandfluid.com/entrained-oil-recovery/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409337.38/warc/CC-MAIN-20200530133926-20200530163926-00577.warc.gz
en
0.947964
797
2.6875
3
Carbon-14, 14C, or radiocarbon, is a radioactive isotope of carbon discovered on February 27, 1940, by Martin Kamen and Sam Ruben. Its nucleus contains 6 protons and 8 neutrons. Its presence in organic materials is used extensively as the basis of the radiocarbon dating method to date archaeological, geological, and hydrogeological samples. There are three naturally occurring isotopes of carbon on Earth: 99% of the carbon is carbon-12, 1% is carbon-13, and carbon-14 occurs in trace amounts, making up as much as 1 part per trillion (0.0000000001%) of the carbon on the Earth. The half-life of carbon-14 is 5730±40 years. It decays into nitrogen-14 through beta decay. The activity of the modern radiocarbon standard is about 14 disintegrations per minute (dpm) per gram of carbon. The atomic mass of carbon-14 is about 14.003241 amu. The different isotopes of carbon do not differ appreciably in their chemical properties. This is used in chemical research in a technique called carbon labeling: some carbon-12 atoms of a given compound are replaced with carbon-14 atoms (or some carbon-13 atoms) in order to trace them along chemical reactions involving the given compound.

Origin and radioactive decay of carbon-14
Carbon-14 is produced in the upper layers of the troposphere and the stratosphere by thermal neutrons absorbed by nitrogen atoms. When cosmic rays enter the atmosphere, they undergo various transformations, including the production of neutrons.
The resulting neutrons (1n) participate in the following reaction:

1n + 14N → 14C + 1p

The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 50,000 feet) and at high geomagnetic latitudes, but the carbon-14 readily mixes and becomes evenly distributed throughout the atmosphere and reacts with oxygen to form radioactive carbon dioxide. Carbon dioxide also dissolves in water and thus permeates the oceans. Carbon-14 then goes through radioactive beta decay. By emitting an electron and an anti-neutrino, carbon-14 (half-life of 5730 years) decays into the stable, non-radioactive isotope nitrogen-14.

Radiocarbon dating is a radiometric dating method that uses carbon-14 (14C) to determine the age of carbonaceous materials up to about 60,000 years old. The technique was developed by Willard Libby and his colleagues in 1949 during his tenure as a professor at the University of Chicago. Libby estimated that the steady-state radioactivity concentration of exchangeable carbon-14 would be about 14 disintegrations per minute (dpm) per gram. In 1960, he was awarded the Nobel Prize in Chemistry for this work. One of the frequent uses of the technique is to date organic remains from archaeological sites. Plants fix atmospheric carbon during photosynthesis, so the level of C-14 in plants at the time wood is laid down, or in animals at the time they die, equals the level of C-14 in the atmosphere at that time. However, it decreases thereafter from radioactive decay, allowing the date of death or fixation to be estimated. The initial C-14 level for the calculation can either be estimated, or else directly compared with known year-by-year data from tree-ring data (dendrochronology) to 10,000 years ago, or from cave deposits (speleothems) to about 45,000 years of age. A calculation or (more accurately) a direct comparison with tree-ring or cave-deposit carbon-14 levels gives the wood or animal sample's age from formation.
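The age calculation just described follows directly from the exponential decay law: with half-life t½, a sample retaining a fraction f of its original C-14 has age t = (t½ / ln 2) · ln(1/f). A minimal sketch, using the 5,730-year half-life quoted above (the function name is illustrative):

```python
import math

HALF_LIFE_C14 = 5730.0  # years (±40, as noted above)

def radiocarbon_age(fraction_remaining: float) -> float:
    """Years since carbon fixation, given the fraction of the
    original C-14 activity still present in the sample."""
    return HALF_LIFE_C14 / math.log(2) * math.log(1.0 / fraction_remaining)

print(round(radiocarbon_age(0.5)))    # one half-life: 5730 years
print(round(radiocarbon_age(0.25)))   # two half-lives: 11460 years
```

In practice the "fraction remaining" is inferred by comparing the sample's measured activity against the modern standard (about 14 dpm per gram of carbon) or against tree-ring calibration data.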
The technique has limitations within the modern industrial era, due to fossil fuel carbon (which has little carbon-14) being released into the atmosphere in large quantities in the past several centuries.

Carbon-14 and fossil fuels
Most man-made chemicals are made from fossil fuels, such as petroleum or coal, in which the carbon-14 has long since decayed. However, oil deposits often contain trace amounts of carbon-14 (varying significantly, but ranging from 1% of the ratio found in living organisms to undetectable amounts, comparable to an apparent age of 40,000 years for oils with the highest levels of carbon-14). This may indicate possible contamination by small amounts of bacteria, underground sources of radiation (such as uranium decay), or other unknown secondary sources of carbon-14 production. Presence of carbon-14 in the isotopic signature of a sample of carbonaceous material therefore indicates its possible contamination by biogenic sources or the decay of radioactive material in related geologic strata.

Carbon-14 and nuclear tests
The above-ground nuclear tests that occurred in several countries between 1955 and 1963 dramatically increased the amount of carbon-14 in the atmosphere and subsequently in the biosphere; after the tests ended, the atmospheric concentration of the isotope began to decrease. This enabled the determination of the birth year of a deceased individual: the amount of carbon-14 in tooth enamel is measured with accelerator mass spectrometry and compared to records of past atmospheric carbon-14 concentrations. Since teeth are formed at a specific age and do not exchange carbon thereafter, this method allows age to be determined to within 1.6 years. This method only works for individuals born after 1943, and it must be known whether the individual was born in the Northern or the Southern Hemisphere.
Carbon-14 in the human body
Since essentially all sources of human food are derived from plants, the carbon that makes up our bodies contains carbon-14 at the same concentration as the atmosphere. The beta decays from this internal radiocarbon contribute approximately 1 mrem/year (0.01 mSv/year) to each person's dose of ionizing radiation. This is small compared to the doses from potassium-40 (0.39 mSv/year) and radon (which vary). Carbon-14 can be used as a radioactive tracer in medicine. In the urea breath test, a diagnostic test for Helicobacter pylori, urea labeled with approximately 1 μCi (37 kBq) of carbon-14 is fed to a patient. In the event of an H. pylori infection, the bacterial urease enzyme breaks down the urea into ammonia and radioactively labeled carbon dioxide, which can be detected by low-level counting of the patient's breath. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Carbon-14". A list of authors is available in Wikipedia.
<urn:uuid:4569c1a5-65db-4ebb-b683-6d74ba3606b5>
CC-MAIN-2019-30
https://www.chemeurope.com/en/encyclopedia/Carbon-14.html
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524972.66/warc/CC-MAIN-20190716221441-20190717003441-00357.warc.gz
en
0.918131
1,444
3.890625
4
What Causes Low Testosterone? Published on Dec 21, 2010 You probably know the important role testosterone plays in a man’s life. It’s the stuff that makes a man a man, from facial hair and a deeper voice to sex drive, erections, and sperm production. When a man has low testosterone, his libido can plummet, along with other aspects of his sexual function. So you might wonder what causes low testosterone. And is there anything you can do to maintain your testosterone levels, even as you age? Let’s define what we mean by low testosterone. Testosterone is measured with a simple blood test. Most healthy adult men have testosterone levels between 270 and 1,070 nanograms per deciliter (ng/dL). 300 ng/dL is usually the threshold for a low testosterone diagnosis. But keep in mind that a man’s testosterone levels fluctuate during the day. Levels are usually highest around 8 a.m. and lowest around 9 p.m. Most doctors conduct testosterone tests early in the morning so they can get a consistent reading over time. Another thing to think about is the way testosterone is produced. Most of it is made in the testes, but before that even happens, signals from the pituitary gland and the hypothalamus (a part of the brain) need to trigger that production. The pituitary gland and hypothalamus are just as important as the testes. Now, let’s look at some of the reasons behind low testosterone. Aging. For most men, testosterone levels start decreasing around age 40 and continue to decrease about 1% each year. So by age 70, your levels can decline by about 30%. The good news is that even with the drop, three-quarters of older men still have testosterone levels in the normal range. Obesity. Some of a man’s testosterone is naturally converted to estrogen, a hormone usually associated with women. But men need estrogen, too, especially to maintain healthy bone density. The problem with obesity is that the conversion from testosterone to estrogen mainly happens in fat cells. 
The more fat cells you have, the more testosterone is being converted to estrogen, leading to lower testosterone levels. Injury to the testicles or scrotum. Injured testes are sometimes unable to produce the amount of testosterone a man needs. Interestingly, levels can remain stable if only one testicle is injured. The healthy one can still produce enough testosterone on its own. Chemotherapy and radiation therapy. These therapies can damage cells in the testes that make testosterone. Sometimes, levels return to normal if the cells recover, but sometimes the damage is permanent. Medications. Opiates, taken for pain, and certain hormones can cause problems with testosterone production. Performance-enhancing drugs (anabolic steroids). Bodybuilders and athletes sometimes take anabolic steroids to make them stronger or faster. But performance-enhancing drugs can make the testicles shrink and impair testosterone production. They are also illegal when used in this way. Inflammation. Certain conditions and diseases, such as sarcoidosis, histiocytosis, tuberculosis, and HIV/AIDS, can affect the pituitary gland and/or the hypothalamus because of inflammation. Infection. Mumps, meningitis, and syphilis are known to lower testosterone levels. Head trauma and tumors. These conditions can also affect the pituitary gland and hypothalamus. Too much iron in the blood (hemochromatosis). This can cause damage to your testes and your pituitary gland. Is there anything I can do to keep my testosterone levels from decreasing? Maybe. Keeping yourself fit and healthy – important for so many reasons – is also important for your testosterone. Protect your testicles when you play sports. Make sure you get enough exercise, including resistance exercises and strength training. Eat a healthy diet full of fruits and vegetables and high-fiber foods. Watch your fat intake. Practice safe sex and don't abuse drugs and alcohol.
Taking these steps can help prevent some of the causes of low testosterone, such as obesity, cancer, and HIV/AIDS. Plus, you’ll improve your overall health and your sex life. It’s a win-win situation. If I have low testosterone, should I have hormone replacement therapy (HRT)? Talk to your doctor. There are pros and cons to hormone replacement therapy. Researchers are still unsure how much HRT helps a man’s sexual function overall and there are other factors that can affect your sex life as much as testosterone.
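A note on the arithmetic earlier in this article: the "about 30%" drop by age 70 treats the roughly 1%-per-year decline as simple addition. Compounded annually it works out closer to 26%, which is in the same ballpark. A quick sketch (the function name and the 1% rate used here are illustrative, taken from the figure quoted above):

```python
def fraction_remaining(years: int, annual_decline: float = 0.01) -> float:
    """Fraction of the age-40 testosterone level left after `years`
    of an annual percentage decline, compounded yearly."""
    return (1.0 - annual_decline) ** years

# Decline from age 40 to age 70 (30 years):
drop_pct = (1.0 - fraction_remaining(30)) * 100
print(round(drop_pct))   # ~26% when the 1%/year decline is compounded
```

Either way, most men at 70 would still fall within the 270–1,070 ng/dL normal range quoted earlier.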
<urn:uuid:2811602f-d448-4610-8af9-7324095fc8ba>
CC-MAIN-2014-10
http://www.sexhealthmatters.org/sex-health-blog/what-causes-low-testosterone
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011174089/warc/CC-MAIN-20140305091934-00070-ip-10-183-142-35.ec2.internal.warc.gz
en
0.940845
949
2.640625
3
What if you could carry a complex scientific experiment right in the palm of your hands? A recent $1.26 million grant from the Defense Advanced Research Projects Agency to Penn researchers can help make such portable laboratories a reality. Professor of Mechanical Engineering and Applied Mechanics Haim H. Bau is leading Penn's investigation of microfluidic systems. Resembling circuit boards, these minute devices contain a network of conduits so tiny that they measure from about the thickness of a human hair to a millimeter. The goal is to push liquids from one part of the device to the other so that they undergo chemical or biological reactions—all without the aid of humans. While Bau and his colleagues are focused on the design aspect of the research, he said the potential applications are manifold. For example, by placing a drop of blood into the device, physicians could potentially make the sample undergo a series of reactions, get results in a matter of minutes and consequently make fast decisions. "A practitioner can decide right away what course of action she or he should take in order to deal with whatever situation one encounters," said Bau. The military could also possibly make use of such a device. Meant to be portable and easily manufactured out of silicon, ceramics and/or plastics, the technology could act as a biodetector—a device that could warn the user when harmful biological agents are present. "I'm speculating but the military perhaps would be interested in each soldier or person in each platoon carrying a biodetector in their pocket," said Bau. But while these potential applications sound exciting, Bau stressed that the work being conducted now is still in its preliminary stages. "We [have] just a small piece of this very large pie," he said.
Although he predicted uses in medicine, he cautioned, "We may be generating the necessary tools for this to happen, but we know very little about blood." And he doesn't want to mislead people into thinking that his work will solve the bioterrorism dilemma. For now, he and Howard H. Hu, associate professor of mechanical engineering, are working on the mathematical modeling of microfluidic devices. To test their models, they are depending on Irwin M. Chaiken, research professor of medicine and rheumatology, to conduct experiments involving biological interactions. And there are other areas that need further research. Bau is searching for an effective means to mix reagents. "Everything that happens here is at very slow flow," he said. "One has to think of innovative ways to facilitate mixing." Bau wants to shy away from mechanical means, such as valves, because of their tendency to break and fail. Electrical and magnetic forces are current possibilities. Sounds complicated? Not so, said Bau. He hopes to make microfluidic systems user-friendly. "The idea is that the layman should be able to use it," he said. "Just as to use this PC you don't need a computer scientist and you don't need to have any specialized training. After 15 to 20 minutes of fooling around, you can start doing something." Originally published on November 29, 2001
Found black seeds in a tomato and wondered whether you should discard them or just ignore them? Is the tomato still edible? Here are the [...]

A collection of guides about various plant diseases, their causes, treatments and how to prevent them from destroying your garden.

These are 5 of the best fungicides for tomatoes, very efficient in fighting tomato early blight, late blight, septoria leaf spot, and other fungal [...]

Do your roses have yellow patterns on their leaves? They are infected with the rose mosaic virus. Learn more about this rose disease from this [...]

Plant leaves can tell many things about the health of your plants. Most of the time, the color of the leaves indicates whether a plant is healthy or not. Naturally, the leaves of most plants are green because they contain a green-colored pigment called chlorophyll. When the plant leaves turn to another color, fully or [...]

Are the leaves of your tomato plants turning white and drying out? This post will explain the causes, what to do to save your plants and what you can do in the future to avoid this problem. This was the first year I decided to plant my own tomato seedlings instead of [...]

Early blight is a plant disease caused by a fungal pathogen called Alternaria solani. The causes, symptoms and the treatment for early blight in the [...]
The keyboard is one of the most important components of a computer: it is an input device. Without a keyboard you could not type an email or browse the Internet, and it is just as important if you are a computer gamer. In fact, if you know the right keyboard commands, you can perform many tasks without a mouse at all. So how does a keyboard actually work?

Layout of the keyboard: When you type on a keyboard or press a key, it puts information into a program on the computer. Keyboards generally have 80 to 110 keys. The keycaps, the buttons pressed when a person types, display the numbers and letters. Most keyboards share the same standard layout, QWERTY, although other layouts are used in some countries.

If you have ever looked inside a keyboard, you will have seen that it is a sort of mini computer, containing a small processor and circuits. This processor transfers information from the keyboard to the processor inside the computer. Beneath the keys lies the key matrix: a grid of circuits. When you push a key, it presses the switch beneath it, closing a circuit and passing current to the keyboard's processor, which registers that a key has been pressed. The processor then reads the keymap (sometimes called a character map) to determine which key is being pressed; the keymap also determines whether a letter is uppercase or lowercase when Shift is held together with the letter key. Keyboards can be plugged into the computer with a pin-type (DIN) male plug or a PS/2 plug, and nowadays keyboards with USB connectivity are common.
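The matrix-scan-plus-keymap logic described above can be sketched in a few lines of Python. The layout and keymap entries here are invented for illustration; a real controller works with scan codes and much larger tables:

```python
# Sketch of how a keyboard controller maps closed switches in the key
# matrix to characters via a keymap. The (row, col) positions and the
# character table below are made up for this example.

KEYMAP = {            # (row, col) -> (unshifted char, shifted char)
    (0, 0): ("a", "A"),
    (0, 1): ("s", "S"),
    (1, 0): ("1", "!"),
}

def scan_matrix(pressed, shift=False):
    """Walk every known position in the matrix; a closed switch completes
    its circuit, and the keymap tells the processor which character that
    position means, taking the Shift state into account."""
    chars = []
    for pos, (lower, upper) in KEYMAP.items():
        if pos in pressed:
            chars.append(upper if shift else lower)
    return chars

print(scan_matrix({(0, 0)}))              # ['a']
print(scan_matrix({(1, 0)}, shift=True))  # ['!']
```

The real controller repeats this scan many times per second, reporting changes (key down, key up) rather than the full state, but the lookup idea is the same.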
Communication between the computer and the keyboard is bidirectional, meaning each can send data to the other. A clock line and a data line connect the two, and both lines must be clear before either side transmits. If the computer is sending information over the clock line but the keyboard is not ready, the keyboard waits until the line is clear; while the lines are held low, the keyboard holds its data and waits for the computer to finish. When the computer wants to send data, it pulls both the data and clock lines low to claim the bus, which also ensures that the two sides never transmit at the same time. This is how the computer and keyboard communicate with each other.
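The busy-line behavior described above can be modeled with a toy simulation. This is a deliberate simplification for illustration only; the real PS/2 protocol also clocks start, parity and stop bits bit by bit, which is omitted here:

```python
# Toy model of the bidirectional clock/data handshake: the host pulls
# both lines low to claim the bus, and the keyboard buffers its byte
# until the lines are released. Class and method names are invented
# for this sketch.

class Bus:
    def __init__(self):
        self.clock_low = False
        self.data_low = False
        self.kbd_buffer = []  # bytes the keyboard is holding back

    def host_request_to_send(self):
        # Host inhibits the keyboard by pulling clock and data low.
        self.clock_low = True
        self.data_low = True

    def host_release(self):
        self.clock_low = False
        self.data_low = False

    def keyboard_send(self, byte):
        if self.clock_low or self.data_low:
            # Line busy: hold the byte and wait for the host.
            self.kbd_buffer.append(byte)
            return None
        return byte

bus = Bus()
bus.host_request_to_send()
print(bus.keyboard_send(0x1C))  # None -- line busy, byte buffered
bus.host_release()
print(bus.keyboard_send(0x1C))  # 28 -- line clear, byte goes through
```

The key point the sketch captures is that a single shared medium with a "lines low" convention is enough to arbitrate who talks, so the two sides never transmit simultaneously.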
9. Haga Castle (Shiso City, ☆☆) Haga Shichiro built the original fortification on this site around the year 1261. It was taken over by the Nakamura Clan, who ruled over the area until the end of the Sengoku Period (1467-1590). In 1580, the castle fell to Toyotomi Hideyoshi and was abandoned. This castle is best reached by private transportation. There's a nice little park and a beautiful view of the valley below. The turret in the pictures is a mock yagura, however. 8. Sumoto Castle (Sumoto City, ☆☆) A castle was first built at this site on Awaji Island in 1526 by Atagi Haruoki. When the Awaji area was conquered by Toyotomi Hideyoshi, he assigned Sengoku Hidehisa as lord of the castle. In 1585, Wakisaka Yasuhara was reassigned from Takatori Castle to Sumoto Castle, and he renovated much of the castle during his 24-year reign. In 1615, Awaji came under the control of the Tokushima domain, and Hachisuka Yoshishige became the new lord. The castle lordship was passed to Inada Shigetane, a retainer of the Hachisuka, in 1631, and the Inada continued to rule until the Meiji Restoration (1868). These pictures don't do it justice, but Sumoto Castle has many impressive stone walls. It's a bit difficult to get to on public transportation, but would be a worthwhile trip for anyone living in the Kansai area or castle fans with some extra time. The reconstructed "main keep" is really just a simple lookout tower to provide nice views; it should not be considered historically accurate nor representative of any castles. 7. Sasayama Castle (Sasayama City, ☆☆☆) From the top of Mount Takashiro to the southeast of Sasayama Castle, the Hatano Clan ruled over the Tanba area from Yagami Castle. But Yagami Castle fell to attacks by Akechi Mitsuhide in 1579, and in 1608, Tokugawa Ieyasu's son Matsudaira Yasushige became lord of Yagami Castle. 
The following year, Ieyasu initiated the construction of Sasayama Castle while dismantling Yagami Castle as part of his plan to better control Osaka Castle, the Toyotomi, and the other lords of western Japan. The castle was designed by Todo Takatora and completed in only six months. It's famous for having an intact gate type called umadashi. The structures of this castle were torn down after the Meiji Restoration (1868), with the exception of the Oshoin Palace, which was burned in a fire in 1944. However, the palace was reconstructed from original pictures, and also by making reference to the Ninomaru Palace at Nijo Castle. Sasayama sits in a basin surrounded by mountains on all sides. When visiting this location it's easy to get caught up in the castle and castle town in such an isolated spot in the mountains. Even though there are almost no structures remaining of the castle today, one can easily imagine what it was like during the Edo Period (1603-1868). There are also a museum and samurai homes to tell the story of samurai in this area. There are few tourists during the off season, though you can expect more crowds during cherry blossoms season in the spring and during the summer festival period. The area is also famous for a kind of black bean and wild boar meat. These Tanba-area beans are large and sweet, with excellent flavor. 6. Tatsuno Castle (Tatsuno City, ☆☆☆) Tatsuno Castle is actually made up of the mountaintop castle that sits atop Mount Keirozan and the castle at the base of the mountain. The mountaintop castle was constructed around 500 years ago by Akamatsu Murahide, and was controlled by four generations of the Akamatsu. In 1577, the Akamatsu turned over this castle to Toyotomi Hideyoshi, who had by then conquered the Chugoku region. At this point, a new castle was constructed at the base of the mountain as a subordinate castle to Himeji Castle. 
Until Wakisaka Yasumasa became lord of the castle in 1672, it changed hands several times, leading to the degradation of both the castle and the surrounding castle town. The castle and town were reinvigorated under Wakisaka, and his descendants continued to rule over the region until the Meiji Period (1868-1912). The famous swordsman Miyamoto Musashi trained at Enkoji Temple and taught his disciples here in Tatsuno. The current Honmaru Palace, gates and yagura (turrets) are wooden reconstructions. Situated just 15 kilometers (9.3 miles) from Himeji in the southwest of the Harima region (southwestern Hyogo), Tatsuno has thrived since the old days due to its location near the Ibo River and convenient transportation. The town itself is rather small, but as you walk its narrow streets among the houses you can see old samurai homes and temples while enjoying the historic atmosphere. It has also been called the "little Kyoto" of Harima. The extensive remains around the mountaintop are an easy hike, and well worth the time for castle fans. There are many terraced baileys on the way up to the honmaru (central bailey). The hiking path may be a bit difficult to find. Look for a wooden box on a post with maps inside. It's around the back and to the left of the palace, near the Koraimon gate. Then you need to open and close the small gate next to the box of maps. You can also ask for directions and a map at the museum. Down the opposite side of the mountain on the way back, you can also see the terraced remains of many samurai homes and gardens. 5. Arikoyama Castle (Izushi Town, ☆☆☆) Yamana Suketoyo built this castle in 1574 after his Konosumiyama Castle (此隅山城) was defeated by Toyotomi Hideyoshi. Arikoyama Castle is just 3 kilometers (1.9 miles) southeast of Konosumiyama Castle. Since konosumi-yama sounds like ko-nusumi-yama, which means "mountain of the stolen child," he named his new castle Arikoyama Castle (有子山城)—literally, "mountain where the child is"—as a play on words. 
Arikoyama Castle was attacked by Hideyoshi in 1580, when Suketoyo's son Akihiro was lord of the castle. After it fell to Hideyoshi, Maeno Nagayasu and then Koide Yoshimasa became lords of Arikoyama Castle. After the Battle of Sekigahara (1600), the Koide Clan fortified the foot of the mountain as Izushi Castle and abandoned the mountaintop castle entirely. The Edo Period (1603-1868) saw a more stable government with no local conflicts, so many provinces moved from mountaintop castles to lower castles and put their efforts into building up the surrounding castle towns instead. 4. Izushi Castle (Izushi Town, ☆☆☆) After the Battle of Sekigahara, Japan's political climate became significantly more stable, with less need for castles as defensive positions. Because of this, Koide Yoshimasa's son Yoshihide decided to fortify the area around the foot of their clan's mountaintop castle, Arikoyama Castle, and built Izushi Castle in 1605. As a result, Arikoyama Castle was abandoned and Izushi Castle became the main castle for the Tajima Domain under the "One Domain, One Castle" law of 1615. A main keep was never built at Izushi Castle, but it was well fortified with several baileys, moats and yagura (turrets). Four main baileys start at the base of the mountain and go up in steps. The highest bailey, Inari Kuruwa, is thought to have been the location of the lord's palace for Arikoyama Castle. The castle town was also designed for defense of the castle. The samurai quarters surround the outside of the castle, and several temples were strategically placed near main roads and entrances that could also be used for defense if needed. The Koide ruled until 1697, when Izushi was transferred to the Matsudaira. In 1706, Matsudaira Tadanori was transferred to Ueda Castle and Sengoku Masaaki, lord of Ueda Castle, was moved to Izushi Castle. The Sengoku Clan continued to rule over Izushi Castle until the Meiji Restoration in 1868. 3. 
Akashi Castle (Akashi City, ☆☆☆) Ogasawara Tadazane (former lord of Matsumoto) moved into this area in 1617. In 1619, under orders from Tokugawa Hidetada, he built Akashi Castle in just one year for the purposes of watching over the western lords and building up Tokugawa defenses in the region. He accomplished building this castle in so little time mainly because he used materials from castles in the area that were decommissioned under the one-castle-per-domain law. The castle deftly makes use of the natural terrain in a three-tiered compound. Ogasawara's father-in-law, Honda Tadamasa—who also directed the construction of Himeji Castle—assisted with the construction of Akashi Castle. Even though they built a foundation for a large main keep, no main keep was ever built. In its place, the honmaru (central bailey) had four large, three-story yagura (turrets), two of which are still standing today. Eventually, Ogasawara Tadazane was moved to Kokura Castle, and the lordship of Akashi Castle changed hands several times until it was taken over by Matsudaira Naoakira in 1682. The Matsudaira continued to rule until the Meiji Restoration (1868). While you mostly only see pictures of the two main yagura for this castle, many stone walls and well-defined baileys still exist as well. 2. Ako Castle (Ako City, Hyogo, ☆☆☆☆) Ukita Hideie built a branch or subordinate castle of Okayama Castle here in 1573. When Asano Naganao came in 1648, he was instructed by the Tokugawa government to build a new castle. If you look at a map of the castle, you'll see that the outline is very unusual. It employs a lot of corners and arrowhead-type structures. This was a very modern idea to improve firing range near the castle and increase its defensive ability. You also see such structures very clearly at Goryokaku in Hakodate. There's a main keep foundation at Ako Castle but the main keep wasn't built because the Tokugawa government never granted permission to do so. 
Ako Castle was dismantled in 1873 under the Castle Abolishment Law. 1. Takeda Castle (Asago City, Hyogo, ☆☆☆☆) Takeda Castle was built on this site in the path of aggression between the Harima (southwestern Hyogo), Tanba (eastern Hyogo and central Kyoto Prefecture) and Tajima (northern Hyogo) regions as a stronghold of Izushi Castle. It was built by Otagaki Mitsukage, a retainer of Yamana Sozen, lord of the area, in 1441. Otagaki, whose family had been military commanders of the Yamana clan for five generations, became lord of the castle. Takeda Castle was conquered by Toyotomi Hideyoshi in his Tajima campaign of 1577. Hideyoshi placed it in the control of his younger brother, Hidenaga, who moved to Izushi less than two years later. Akamatsu Hirohide, the last lord of the castle, fought on the side of the Western Forces for Tokugawa Ieyasu at the Battle of Sekigahara (1600). Hirohide served valiantly in the battle, but was accused of arson. Later that year he committed seppuku and Takeda Castle was abandoned. This is a truly impressive castle. Despite being only ruins, the location, stone walls, design and view easily make it worth four stars. It's amazing how they built such extensive stone walls on top of the mountain, and you won't find fences or shrubs along the steep drop-offs along the edges of the stone walls like you might see at other castles. There are few trains running to Takeda, and you'll want at least 90 minutes at the castle (depending on how much you take pictures), so plan accordingly. There is a bit of historic atmosphere of the old castle town and temples near the station as well, and there are also nice views looking down on the castle from a nearby mountain.
U-boat is an anglicised version of the German word U-Boot [ˈuːboːt], a shortening of Unterseeboot, literally "undersea boat". While the German term refers to any submarine, the English one (in common with several other languages) refers specifically to military submarines operated by Germany, particularly in the First and Second World Wars. Although at times they were efficient fleet weapons against enemy naval warships, they were most effectively used in an economic warfare role (commerce raiding) and enforcing a naval blockade against enemy shipping. The primary targets of the U-boat campaigns in both wars were the merchant convoys bringing supplies from Canada and other parts of the British Empire, and the United States to the United Kingdom and (during the Second World War) to the Soviet Union and the Allied territories in the Mediterranean. Austro-Hungarian navy submarines were also known as U-boats.

Early U-boats (1850–1914)

The first submarine built in Germany, the three-man Brandtaucher, sank to the bottom of Kiel harbor on 1 February 1851 during a test dive. The inventor and engineer Wilhelm Bauer had designed this vessel in 1850, and Schweffel & Howaldt constructed it in Kiel. Dredging operations in 1887 rediscovered Brandtaucher; it was later raised and put on historical display in Germany. There followed in 1890 the boats W1 and W2, built to a Nordenfelt design. In 1903 the Friedrich Krupp Germaniawerft dockyard in Kiel completed the first fully functional German-built submarine, Forelle, which Krupp sold to Russia during the Russo-Japanese War in April 1904. The SM U-1 was a completely redesigned Karp-class submarine and only one was built. 
The Imperial German Navy commissioned it on 14 December 1906. It had a double hull, a Körting kerosene engine, and a single torpedo tube. The 50%-larger SM U-2 (commissioned in 1908) had two torpedo tubes. The U-19 class of 1912–13 saw the first diesel engine installed in a German navy boat. At the start of World War I in 1914, Germany had 48 submarines of 13 classes in service or under construction. During that war the Imperial German Navy used SM U-1 for training. Retired in 1919, it remains on display at the Deutsches Museum in Munich.

World War I (1914–1918)

On 5 September 1914, HMS Pathfinder was sunk by SM U-21, the first ship to have been sunk by a submarine using a self-propelled torpedo. On 22 September, U-9 sank the obsolete British warships HMS Aboukir, HMS Cressy and HMS Hogue (the "Live Bait Squadron") in a single hour. In the Gallipoli Campaign in early 1915 in the eastern Mediterranean, German U-boats, notably the U-21, prevented close support of Allied troops by 18 pre-Dreadnought battleships by sinking two of them. For the first few months of the war, U-boat anticommerce actions observed the "prize rules" of the time, which governed the treatment of enemy civilian ships and their occupants. On 20 October 1914, SM U-17 sank the first merchant ship, the SS Glitra, off Norway. Surface commerce raiders were proving to be ineffective, and on 4 February 1915, the Kaiser assented to the declaration of a war zone in the waters around the British Isles. This was cited as a retaliation for British minefields and shipping blockades. Under the instructions given to U-boat captains, they could sink merchant ships, even potentially neutral ones, without warning. In February 1915, the submarine U-6 (under Lepsius), after firing a torpedo, was rammed off Beachy Head by the collier SS Thordis, commanded by Captain John Bell RNR, and had both periscopes destroyed. On 7 May 1915, SM U-20 sank the liner RMS Lusitania. 
The sinking claimed 1,198 lives, 128 of them American civilians, and the attack on this unarmed civilian ship deeply shocked the Allies. According to the ship's manifest, Lusitania was carrying military cargo, though none of this information was relayed to the citizens of Britain and the United States, who believed the ship carried no ammunition or military weaponry whatsoever and therefore regarded the sinking as an act of brutal murder. In fact, its cargo included thousands of crates of rifle ammunition, 3-inch artillery shells, and other standard infantry ammunition. The sinking of the Lusitania was widely used as propaganda against the German Empire and caused greater support for the war effort. A widespread reaction in the U.S. was not seen until the sinking of the ferry SS Sussex, torpedoed in 1916; the United States entered the war in 1917. The initial U.S. response was to threaten to sever diplomatic ties, which persuaded the Germans to issue the Sussex pledge that reimposed restrictions on U-boat activity. The U.S. reiterated its objections to German submarine warfare whenever U.S. civilians died as a result of German attacks, which prompted the Germans to fully reapply prize rules. This, however, removed the effectiveness of the U-boat fleet, and the Germans consequently sought a decisive surface action, a strategy that culminated in the Battle of Jutland. Although the Germans claimed victory at Jutland, the British Grand Fleet remained in control at sea. It was necessary to return to effective anticommerce warfare by U-boats. Vice-Admiral Reinhard Scheer, Commander in Chief of the High Seas Fleet, pressed for all-out U-boat war, convinced that a high rate of shipping losses would force Britain to seek an early peace before the United States could react effectively. The renewed German campaign was effective, sinking 1.4 million tons of shipping between October 1916 and January 1917. 
Despite this, the political situation demanded even greater pressure, and on 31 January 1917, Germany announced that its U-boats would engage in unrestricted submarine warfare beginning 1 February. On 17 March, German submarines sank three American merchant vessels, and the U.S. declared war on Germany in April 1917. Unrestricted submarine warfare in early 1917 was initially very successful, sinking a major part of Britain-bound shipping. With the introduction of escorted convoys, shipping losses declined and in the end the German strategy failed to destroy sufficient Allied shipping. An armistice became effective on 11 November 1918 and all surviving German submarines were surrendered. Of the 360 submarines that had been built, 178 were lost, but more than 11 million tons of shipping had been destroyed. Of the 178 submarines destroyed, SM U-103 was sunk when the troopship RMS Olympic rammed it as it attempted to crash dive, killing all on board.

- Körting kerosene-powered boats
- Mittel-U MAN diesel boats
- U-Cruisers and Merchant U-boats
- UB coastal torpedo attack boats
- UC coastal minelayers
- UE ocean minelayers

Surrender of the fleet

Under the terms of the armistice, all U-boats were to immediately surrender. Those in home waters sailed to the British submarine base at Harwich. The entire process was done quickly and in the main without difficulty, after which the vessels were studied, then scrapped or given to Allied navies. Stephen King-Hall wrote a detailed eyewitness account of the surrender.

Interwar years (1919–1939)

At the end of World War I, as part of the Paris Peace Conference, 1919, the Treaty of Versailles restricted the total tonnage of the German surface fleet. The treaty also restricted the independent tonnage of ships and forbade the construction of submarines. However, a submarine design office was set up in the Netherlands and a torpedo research program was started in Sweden. 
Before the start of World War II, Germany started building U-boats and training crews, labeling these activities as "research" or concealing them using other covers. When this became known, the Anglo-German Naval Agreement limited Germany to parity with Britain in submarines. When World War II started, Germany already had 65 U-boats, with 21 of those at sea, ready for war.

World War II (1939–1945)

During World War II, U-boat warfare was the major component of the Battle of the Atlantic, which lasted the duration of the war. The revived German navy (Kriegsmarine) built up the largest submarine fleet from 1935 into World War II (1939-1945), since the Armistice of 11 November 1918 had stripped Germany of most of the old Imperial Navy's fleet and the subsequent Treaty of Versailles of 1919 limited the surface navy of Germany's new Weimar Republic to only six battleships (of less than 10,000 tons each), six cruisers, and 12 destroyers. British Prime Minister Winston Churchill later wrote "The only thing that really frightened me during the war was the U-boat peril." In the early stages of the war the U-boats were extremely effective in destroying Allied shipping due to the large gap in mid-Atlantic air cover. Cross-Atlantic trade in war supplies and food was extensive and critical for Britain's survival. The continuous action surrounding British shipping became known as the Battle of the Atlantic, as the British developed technical defences such as ASDIC and radar, and the German U-boats responded by hunting in what were called "wolfpacks", where multiple submarines would stay close together, making it easier for them to sink a specific target. Britain's vulnerable shipping situation existed until 1942, when the tides changed as the U.S. merchant marine and Navy entered the war, drastically increasing the amount of tonnage of supplies sent across the Atlantic. 
The combination of increased tonnage and increased naval protection of shipping convoys made it much more difficult for U-boats to make a significant dent in British shipping. Once the United States entered the war, U-boats ranged from the Atlantic coast of the United States and Canada to the Gulf of Mexico, and from the Arctic to the west and southern African coasts and even as far east as Penang. The U.S. military engaged in various tactics against German incursions in the Americas; these included military surveillance of foreign nations in Latin America, particularly in the Caribbean, to deter any local governments from supplying German U-boats. Because speed and range were severely limited underwater while running on battery power, U-boats were required to spend most of their time surfaced running on diesel engines, diving only when attacked or for rare daytime torpedo strikes. The more ship-like hull design reflects the fact that these were primarily surface vessels that could submerge when necessary. This contrasts with the cylindrical profile of modern nuclear submarines, which are more hydrodynamic underwater (where they spend the majority of their time), but less stable on the surface. While U-boats were faster on the surface than submerged, the opposite is generally true of modern submarines. The most common U-boat attack during the early years of the war was conducted on the surface and at night. This period, before the Allied forces developed truly effective antisubmarine warfare tactics, which included convoys, was referred to by German submariners as "die glückliche Zeit", the "First Happy Time". The U-boats' main weapon was the torpedo, though mines and deck guns (while surfaced) were also used. By the end of the war, almost 3,000 Allied ships (175 warships; 2,825 merchant ships) were sunk by U-boat torpedoes. 
Early German World War II torpedoes were straight runners, as opposed to the homing and pattern-running torpedoes that were fielded later in the war. They were fitted with one of two types of pistol triggers: impact, which detonated the warhead upon contact with a solid object, and magnetic, which detonated upon sensing a change in the magnetic field within a few meters. One of the most effective uses of magnetic pistols would be to set the torpedo's depth to just beneath the keel of the target. The explosion under the target's keel would create a detonation shock wave, which could cause a ship's hull to rupture under the concussive water pressure. In this way, even large or heavily armored ships could be sunk or disabled with a single, well-placed hit. In practice, however, the depth-keeping equipment and magnetic and contact exploders were notoriously unreliable in the first eight months of the war. Torpedoes often ran at an improper depth, detonated prematurely, or failed to explode altogether—sometimes bouncing harmlessly off the hull of the target ship. This was most evident in Operation Weserübung, the invasion of Norway, where various skilled U-boat commanders failed to inflict damage on British transports and warships because of faulty torpedoes. The faults were largely due to a lack of testing. The magnetic detonator was sensitive to mechanical oscillations during the torpedo run, and to fluctuations in the Earth's magnetic field at high latitudes. These were eventually phased out, and the depth-keeping problem was solved by early 1942. Later in the war, Germany developed an acoustic homing torpedo, the G7/T5. It was primarily designed to combat convoy escorts. The acoustic torpedo was designed to run straight to an arming distance of 400 m and then turn toward the loudest noise detected. This sometimes ended up being the U-boat itself; at least two submarines may have been sunk by their own homing torpedoes. 
Additionally, these torpedoes were found to be only effective against ships moving at greater than 15 knots (28 km/h). The Allies countered acoustic torpedoes with noisemaker decoys such as Foxer, FXR, CAT and Fanfare. The Germans, in turn, countered this by introducing newer and upgraded versions of the acoustic torpedoes, like the late-war G7es, and the T11 torpedo. However, the T11 torpedoes did not see active service. U-boats also adopted several types of "pattern-running" torpedoes that ran straight out to a preset distance, then traveled in either a circular or ladder-like pattern. When fired at a convoy, this increased the probability of a hit if the weapon missed its primary target. During World War II, the Kriegsmarine produced many different types of U-boats as technology evolved. Most notable is the Type VII, known as the "workhorse" of the fleet, which was by far the most-produced type, and the Type IX boats, which were larger versions of the VII designed for long-range patrols, some traveling as far as Japan and the east coast of the United States. With the increasing sophistication of Allied detection and subsequent losses, German designers began to fully realise the potential for a truly submerged boat. The Type XXI "Elektroboot" was designed to favor submerged performance, both for combat effectiveness and survival. It was the first true submersible. The Type XXI featured an evolutionary design that combined several different strands of the U-Boat development program, most notably from the Walter U-boats, the Type XVII, which featured an unsuccessful yet revolutionary hydrogen peroxide air-independent propellant system. These boats featured a streamlined hull design, which formed the basis of the later USS Nautilus nuclear submarine, and was adapted for use with more conventional propulsion systems. 
The larger hull design allowed for a greatly increased battery capacity, which enabled the XXI to cruise submerged for longer periods and reach unprecedented submerged speeds for the time. Throughout the war, an arms race evolved between the Allies and the Kriegsmarine, especially in detection and counterdetection. Sonar (ASDIC in Britain) allowed Allied warships to detect submerged U-boats (and vice versa) beyond visual range, but was not effective against a surfaced vessel; thus, early in the war, a U-boat at night or in bad weather was actually safer on the surface. Advancements in radar became particularly deadly for the U-boat crews, especially once aircraft-mounted units were developed. As a countermeasure, U-boats were fitted with radar warning receivers, to give them ample time to dive before the enemy closed in, as well as more anti-aircraft guns. However, by early to mid-1943, the Allies switched to centimetric radar (unknown to Germany), which rendered the radar detectors ineffective. U-boat radar systems were also developed, but many captains chose not to use them for fear of broadcasting their position to enemy patrols, and because of a lack of sufficient electronic countermeasures. Early on, the Germans experimented with the idea of the Schnorchel (snorkel) from captured Dutch submarines, but saw no need for them until rather late in the war. The Schnorchel was a retractable pipe that supplied air to the diesel engines while submerged at periscope depth, allowing the boats to cruise and recharge their batteries while maintaining a degree of stealth. It was far from a perfect solution, however. Problems occurred with the device's valve sticking shut or closing as it dunked in rough weather; since the system used the entire pressure hull as a buffer, the diesels would instantaneously suck huge volumes of air from the boat's compartments, and the crew often suffered painful ear injuries. 
Waste disposal was a problem when the U-boats spent extended periods without surfacing, as it is today. Speed was limited to 8 knots (15 km/h), lest the device snap from stress. The Schnorchel also had the effect of making the boat effectively both noisy and deaf in sonar terms. Finally, Allied radar eventually became sufficiently advanced that the Schnorchel mast could be detected beyond visual range. Other pioneering innovations included acoustic- and radar-absorbent coatings to make the boats less detectable by ASDIC or radar. The Germans also developed active countermeasures such as facilities to release artificial chemical bubble-making decoys, known as Bold, after the mythical kobold. - Type I: first prototypes - Type II: small submarines used for training purposes - Type V: uncompleted experimental midget submarines - Type VII: the "workhorse" of the U-boats, with 700 active in World War II - Type IX: these long-range U-boats operated as far as the Indian Ocean with the Japanese (Monsun Gruppe), and the South Atlantic - Type X: long-range minelayers and cargo transports - Type XI: uncompleted experimental artillery boats - Type XIV: used to resupply other U-boats; nicknamed the Milchkuh ("Milk Cow") - Type XVII: small coastal submarines powered by high-test peroxide propulsion systems - Type XXI: known as the Elektroboot; first subs to operate primarily submerged - Type XXIII: smaller version of the XXI used for coastal operations - Midget submarines, including Biber, Hai, Molch, and Seehund - Uncompleted U-boat projects Advances in convoy tactics, high-frequency direction finding (referred to as "Huff-Duff"), radar, active sonar (called ASDIC in Britain), depth charges, ASW spigot mortars (also known as "hedgehog"), the intermittent cracking of the German Naval Enigma code, the introduction of the Leigh light, the range of escort aircraft (especially with the use of escort carriers), the use of mystery ships, and the full entry of the U.S. 
into the war with its enormous shipbuilding capacity, all turned the tide against the U-boats. In the end, the U-boat fleet suffered extremely heavy casualties, losing 793 U-boats and about 28,000 submariners (a 75% casualty rate, the highest of all German forces during the war). At the same time, the Allies targeted the U-boat shipyards and their bases with strategic bombing. The British had a major advantage in their ability to read some German naval Enigma codes. An understanding of the German coding methods had been brought to Britain via France from Polish code-breakers. Thereafter, code books and equipment were captured by raids on German weather ships and from captured U-boats. A team including Alan Turing used special purpose "Bombes" and early computers to break new German codes as they were introduced. The speedy decoding of messages was vital in directing convoys away from wolf packs and allowing interception and destruction of U-boats. This was demonstrated when the Naval Enigma machines were altered in February 1942 and wolf-pack effectiveness greatly increased until the new code was broken. The German submarine U-110, a Type IXB, was captured in 1941 by the Royal Navy, and its Enigma machine and documents were removed. U-559 was also captured by the British, in October 1942; three sailors boarded her as she sank and desperately passed the code books out of the submarine to salvage them. Two of them, Able Seaman Colin Grazier and Lieutenant Francis Anthony Blair Fasson, continued passing code books out as the boat went under, and went down with her. Further code books were captured by raids on weather ships. U-744 was boarded by crew from the Canadian ship HMCS Chilliwack on 6 March 1944, and codes were taken from her, but by this time in the war, most of the information was already known. The U-505, a Type IXC, was captured by the United States Navy in June 1944. 
It is now a museum ship in Chicago at the Museum of Science and Industry.
Battle of Bell Island
Two events in the battle took place in 1942, when German U-boats attacked four Allied ore carriers at Bell Island, Newfoundland. The carriers SS Saganaga and SS Lord Strathcona were sunk by U-513 on 5 September 1942, while the SS Rosecastle and PLM 27 were sunk by U-518 on 2 November with the loss of 69 lives. When the submarine launched a torpedo at the loading pier, Bell Island became the only location in North America to be subject to direct attack by German forces in World War II. "Operation Deadlight" was the code name for the scuttling of U-boats surrendered to the Allies after the defeat of Germany at the end of the war. Of the 154 U-boats surrendered, 121 were scuttled in deep water off Lisahally, Northern Ireland, or Loch Ryan, Scotland, in late 1945 and early 1946.
Post–World War II and Cold War (after 1945)
From 1955, the West German Bundesmarine was allowed to have a small navy. Initially, two sunken Type XXIIIs and a Type XXI were raised and repaired. In the 1960s, the Federal Republic of Germany (West Germany) re-entered the submarine business. Because West Germany was initially restricted to a 450-tonne displacement limit, the Bundesmarine focused on small coastal submarines to protect against the Soviet threat in the Baltic Sea. The Germans sought to use advanced technologies to offset the small displacement, such as amagnetic steel to protect against naval mines and magnetic anomaly detectors. The initial Type 201 was a failure because of hull cracking; the subsequent Type 205, first commissioned in 1967, was a success, and 12 were built for the German navy. To continue the U-boat tradition, the new boats received the classic U designation starting with U-1. With the Danish government's purchase of two Type 205 boats, the West German government realized the potential of the submarine as an export. 
Three of the improved Type 206 boats were later sold to the Israeli Navy, becoming the Gal class. The German Type 209 diesel-electric submarine was the most popular export-sales submarine in the world from the late 1960s into the first years of the 21st century. With a larger 1,000–1,500 tonne displacement, the class was very customizable and has seen service with 14 navies, with 51 examples built as of 2006. Germany has brought the U-boat name into the 21st century with the new Type 212. The 212 features an air-independent propulsion system using hydrogen fuel cells. This system is safer than previous closed-cycle diesel engines and steam turbines, cheaper than a nuclear reactor and quieter than either. While the Type 212 is also being purchased by Italy, the Type 214 has been designed as the follow-on export model and has been sold to Greece, South Korea and Turkey. In July 2006, Germany commissioned its newest U-boat, the U-34, a Type 212. - Submarine warfare - List of U-boats of Germany - List of U-boats never deployed - List of successful U-boats - List of successful U-boat commanders - Das Boot, 1981 German U-boat film - Aces of the Deep, 1994 U-boat simulator video game - Silent Hunter III, 2005 U-boat simulator video game, third of a series - Karl Dönitz - Orkney Wireless Museum contains an example of a U-boat radio - List of Knight's Cross recipients of the U-boat service - Sieglinde (decoy) - Bold (decoy) - U-boat Campaign (World War I) - I-boat, Japanese equivalent - "U-boat". Online Etymology Dictionary. Retrieved 2012-06-22. - Showell, p. 23 Compare: Chaffin, Tom (2010). The H. L. Hunley: The Secret Hope of the Confederacy. Macmillan. p. 53. ISBN 9781429990356. Retrieved 2016-07-14. Bauer's boat made a promising start, diving in tests in the Baltic Sea's Bay of Kiel to depths of more than fifty feet. In 1855, during one of those tests, the boat malfunctioned. 
The Brandtaucher plunged fifty-four vertical feet and refused to ascend from the seafloor. Bauer and his crew – leaving their craft on the bottom – barely escaped with their lives. - Showell, p. 201 - Showell, pp. 22, 23, 25, 29 - Showell, p. 30 - Showell, pp. 36 & 37 - "Archived copy". Archived from the original on 27 December 2008. Retrieved 2 November 2008. - "WWI U-Boats U-17". Uboat.net. Retrieved 2008-03-24. - Haley Dixon (21 June 2013). "Story of Captain's courage resurfaces after 98 years". Daily Telegraph. Retrieved 22 June 2013. - "Full text of "A North Sea diary, 1914–1918 / Commander Stephen King-Hall"". - Hakim, Joy (1998). A History of Us: War, Peace and all that Jazz. New York: Oxford University Press. pp. 100–104. ISBN 0-19-509514-6. - Military History Online - Crocker III, H. W. (2006). Don't Tread on Me. New York: Crown Forum. p. 310. ISBN 978-1-4000-5363-6. - Karl Dönitz. Memoirs: Ten Years and Twenty Days. Naval Institute Press. p. 482. ISBN 0-87021-780-1. - "The Torpedoes". - Helgason, Gudmundur. "Captured U Boats". Uboat.net. http://uboat.net/fates/captured.htm - John Abbatiello. Anti-Submarine Warfare in World War I: British Naval Aviation and the Defeat of the U-Boats (2005) - Buchheim, Lothar-Günther, Das Boot (original German edition 1973, eventually translated into English and many other Western languages). Movie adaptation in 1981, directed by Wolfgang Petersen - Gannon, Michael (1998) Black May. Dell Publishing. ISBN 0-440-23564-2 - Gannon, Michael (1990) Operation Drumbeat. Naval Institute Press. ISBN 978-1-59114-302-4 - Gray, Edwyn A. The U-Boat War, 1914–1918 (1994) - Hans Joachim Koerver. German Submarine Warfare 1914–1918 in the Eyes of British Intelligence, LIS Reinisch 2010, ISBN 978-3-902433-79-4 - Kurson, Robert (2004). Shadow Divers: The True Adventure of Two Americans Who Risked Everything to Solve One of the Last Mysteries of World War II. Random House Publishing. ISBN 0-375-50858-9 - Möller, Eberhard and Werner Brack. 
The Encyclopedia of U-Boats: From 1904 to the Present (2006) ISBN 1-85367-623-3 - O'Connor, Jerome M. "Inside the Grey Wolves' Den." Naval History, June 2000. The US Naval Institute Author of the Year feature describes the building and operation of the German U-boat bases in France. - Preston, Anthony (2005). The World's Greatest Submarines. - Stern, Robert C. (1999). Battle Beneath the Waves: U-boats at war. Arms and Armor/Sterling Publishing. ISBN 1-85409-200-6. - Showell, Jak Mallmann. The U-boat Century: German Submarine Warfare, 1906–2006 (2006) ISBN 1-59114-892-8 - van der Vat, Dan. The Atlantic Campaign. Harper & Row, 1988. Connects submarine and antisubmarine operations between World War I and World War II, and suggests a continuous war. - Von Scheck, Karl. U122: The Diary of a U-boat Commander. Diggory Press ISBN 978-1-84685-049-3 - Georg von Trapp and Elizabeth M. Campbell. To the Last Salute: Memories of an Austrian U-Boat Commander (2007) - Westwood, David. U-Boat War: Doenitz and the evolution of the German Submarine Service 1935–1945 (2005) ISBN 1-932033-43-2 - Werner, Herbert. Iron Coffins: A Personal Account of the German U-Boat Battles of World War II ISBN 978-0-304-35330-9 Wikimedia Commons has media related to U-boat. - TheSubPen The Sub "Pen," your home for submarine and U-boat history. - uboat.net Comprehensive reference source for WW I and WW II U-boat information. - uboat-bases.com The German U-boat bases of the WW-II in France: Brest, Lorient, St-Nazaire, La Rochelle, Bordeaux. - ubootwaffe.net Comprehensive reference source for WW II U-boat information. - WWII German UBoats - German sub sank near U.S., The Augusta Chronicle - U Boat Sanctuary – Inside The Indestructible U Boat Bases In Brittany
Women’s Reproductive Health: Human Rights
Women’s rights to reproductive and sexual health are fundamental to women’s health in the United States and abroad. Efforts concerning women’s rights to reproductive health have been essential in expanding women’s human rights. Adopting a health and human rights framework encourages reasoned conclusions about the connections between women’s health and human rights, social justice, and respect for human dignity. Hindrances to reproductive health rights are political, legal, social, and financial in nature (Gruskin 1737). The purpose of this paper is to detail the significance of human rights associated with women’s reproductive health rights in the United States and the public health implications of these rights. This paper investigates health and human rights as they relate to a woman’s reproductive health in the United States, including the right to autonomy; the right to health care and information; and the right to equity in the distribution of health service resources, availability, and accessibility. The association of these rights with women’s reproductive health in the United States has significant public health implications, discussed below.
Historical Development of Modern Human Rights after WWII
Human rights are standards that protect all humans from serious legal, political, and social abuses (Mann et al. 9). Landmarks in the development of modern human rights after World War II include the World Health Organization’s (WHO) Constitution in 1946, the Universal Declaration of Human Rights (UDHR) in 1948, and the International Covenant on Economic, Social and Cultural Rights in 1966. Each of these documents set out the premise that all humans are equal and free, with rights that include the right to health. The right to health was first expressed in the World Health Organization’s Constitution (1946). 
The World Health Organization’s Constitution declares that the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being (Mann et al. 9; Ross 55; Robinson par. 8). Nevertheless, the right to health continues to be neglected in many parts of the world. This neglect extends, though less grossly, to the United States. The United States has abstained from ratifying this and other international agreements; in fact, it has not ratified a single treaty that acknowledges an entitlement to health for its citizens. The implications of this failure to ratify are discussed later in this paper. Human rights were also expressed by the United Nations in the 1948 Universal Declaration of Human Rights. The Universal Declaration of Human Rights (UDHR) was adopted in reaction to the Nazi holocaust and set a benchmark by which the human rights conduct of all countries should be judged. The UDHR begins by setting forth the fundamental principle that all people are born free and equal in dignity and rights (Mann et al. 10). It also prohibits any distinction in the enjoyment of human rights on grounds such as race, color, sex, language, religion, political opinion, national origin, or birth status. In addition, the UDHR clearly spells out the rights to life, liberty, and security of person, as well as freedom from slavery, servitude, and torture or cruel treatment or punishment (Cook, Dickens, and Fathalla 90-91; Ross 55-56). The International Covenant on Economic, Social and Cultural Rights (1966) further expanded on human rights by specifying socio-economic rights. These rights include, but are not limited to, the rights to education, shelter, health, water and food, employment, social security, a healthy environment, and development (“International Covenant on Economic, Social, and Cultural Rights” articles 10-12). 
The treaty specifies steps to be taken by States parties to achieve: maternal, child, and reproductive health; healthy natural and workplace environments; prevention, treatment, and control of disease; and health facilities, goods, and services. It also states that all socio-economic rights must be realized without discrimination (Cook, Dickens, and Fathalla 153). The right to health is also acknowledged in various other documents worldwide, including the 1961 European Social Charter, the 1978 Declaration of Alma Ata, the 1981 African Charter on Human and Peoples’ Rights, the 1988 Additional Protocol to the American Convention on Human Rights in the Area of Economic, Social and Cultural Rights, and the 1989 Convention on the Rights of the Child.
Women’s Human Rights
Women’s human rights are the freedoms and entitlements claimed for women and girls. They are treated as a distinct category within the broader body of human rights because they frequently differ from the freedoms effectively enjoyed by men and boys. Themes regularly connected with women’s rights include, but are not restricted to, the rights to physical integrity and autonomy, to education, and to marital, parental, and religious rights. In 1979, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) was adopted by the United Nations. CEDAW affirms women’s equal rights with men in all realms of life, including education, employment, healthcare, nationality, and marriage (Cook, Dickens, and Fathalla 198-203; Ross 1-3). In 1995, “The Fourth World Conference on Women: Action for Equality, Development and Peace,” popularly known as the United Nations Fourth World Conference on Women, was held in Beijing, China. 
The conference raised global awareness of human rights and of the inequalities and inequities between men and women, and provided the impetus for treating gender-based violence as a priority issue for action by the global community (Cook, Dickens, and Fathalla 79). Human rights are being used to promote public health. Reproductive health rights appear within the internationally recognized framework of human rights through established rights to life, security, equal treatment, education, development, and the highest attainable standard of health. They include the right to emergency medical services and to the underlying determinants of health, such as freedom from discrimination and adequate food, water, and sanitation (Gruskin and Loff 1880). The right to health is an essential human right that consists of both freedoms and entitlements (Hunt 1878). The freedoms include the right to participate in decisions about one’s health, including those concerning sexual and reproductive life (Germain, “Reproductive Health and Human Rights” 65).
Human Rights and Public Health Standards Regarding Women’s Reproductive Rights
The associations among medicine, public health, and human rights are developing swiftly as a result of a multitude of events and efforts, including ongoing work on various aspects of women’s health. To understand the associations between human rights and public health, it is essential to examine the key elements of modern public health. Medicine and public health are two complementary and interrelated approaches to advancing and protecting health through physical, mental, and social well-being. However, medicine and public health must be distinguished because they serve different purposes (Germain 65). The primary difference is the population focus of public health, which contrasts with the individual focus of medical care. 
Public health identifies and measures health risks to populations, formulates legislative policies in response to those risks, and develops services that promote health and prevent disease (Gruskin and Loff 1880). Medicine, on the other hand, concentrates on the diagnosis and treatment of individuals. There is a strong association between public health and human rights. In the article “Health and Human Rights,” Jonathan Mann et al. describe a three-part framework of health and human rights: the impact of health policies on human rights, the health effects of human rights violations, and the connection between the two. Health practices, policies, and programs have an effect on human rights. Public health responsibilities are carried out in large measure through programs and policies developed, implemented, and enforced with assistance from the state. Public health functions include assessing health problems and needs, developing policies to address priority health issues, and assuring programs that implement stated health goals (Mann et al. 13-17). For example, data on population health may be gathered for certain significant health problems but not others. This can create inequity and other human rights violations by failing to provide suitable health services. Public health is concerned with the advancement and protection of the health of populations. Socioeconomic circumstances and inadequate health are linked in their effects on women’s reproductive health and human rights. Public health and human rights are each concerned with promoting health and with clarifying standards for conduct (Gruskin and Loff 1880). The health and human rights framework is applicable to population issues concerning women’s reproductive health. Human rights violations, such as gender inequalities and lack of access to family planning, have a negative impact on women’s health. 
Encouraging gender equality, development, and access to women’s reproductive health services, and eliminating impediments to women’s economic and educational participation, are essential to promoting public health. Gender disparities are a chief cause of inequality in health status, including health care. Gender differences are evident in disease prevalence, access to preventive care, and reproductive health. Promoting gender equality in other sectors can influence health status and has reinforced public health outcomes (Robinson par. 9). Unfortunately, considerable disparities remain between stated commitments to gender equality in reproductive health services and actual practice, within the United States and abroad. The foremost causes of death and disease among women aged 15-44 globally are reproductive health issues. Globally, inadequate access to family planning is the chief factor behind the 76 million unplanned pregnancies each year; nearly 20 million of these pregnancies end in unsafe abortions, contributing to nearly 70,000 deaths yearly. In developing countries, the primary causes of death and disability among women of reproductive age are complications of pregnancy and childbirth. Less than a quarter of married women in Africa use contraception. Women make up half of all people infected with HIV, nearly all of whom live in developing countries (United Nations, “Reproductive Health Factsheet”). Cultural and societal customs regarding reproductive health contribute to the differences between women’s and men’s health status. Promoting gender equality in healthcare requires recognizing that gender roles and relations are shaped by social contexts in which cultural, religious, economic, and political positions interact. 
Gender customs and discrimination within the United States, along with policies and laws that influence women’s access to health services and education, can have a significant effect on women’s reproductive health and related human rights (Germain, “Reproductive Health and Human Rights” 66). It is imperative to acknowledge the significant health outcomes attributable to a woman’s autonomy in controlling her health and health decisions. A woman’s ability to control when and how many children she has is crucial to increasing women’s economic capabilities. Family planning involves the use of contraception to control the number of children and the intervals between births. Effective reproductive health care allows women to make informed decisions about their reproductive health and welfare (Cook, Dickens, and Fathalla 45-48). Family planning also supports the preservation of women’s freedoms and protects their health by preventing unplanned pregnancies and decreasing women’s exposure to associated health risks (Koop, Pearson, and Schwartz 190-191). All women should have the freedom to decide freely and responsibly the number and spacing of their children, and should be able to acquire the education and information required to realize this right. Services include access to contraceptives, education, legal abortion, sexually transmissible infection (STI) screening and treatment, and pregnancy testing and counseling. In many parts of the world, including the U.S., these services remain unavailable. For example, between 1994 and 2001, low-income women had more unplanned pregnancies, higher rates of abortion, and more unintended births than more affluent women. Low-income women are also less likely to use contraceptives, increasing the incidence of STIs and abortion (Finer and Henshaw 95). High-quality family planning and medical care aim to reduce abortion rates. 
Restricting access to high-quality reproductive health services and education increases the rate of abortion.
Reproductive Health, Human Rights, and Social and Economic Development
Population health is necessary for sustained economic advancement and for overcoming poverty (Novick, Morrow, and Mays 20-24). Men and women should have a fundamental right to health and welfare, but significant infringements and disparities in health determinants and healthcare access continue to exist (Germain, “Reproductive Health and Human Rights” 65). In the United States, numerous links exist between poverty and sexual and reproductive behavior. Being disadvantaged is associated with earlier first intercourse, less consistent or no contraceptive use, and weaker motivation to avoid childbearing and rearing (Gruskin 1737). The prevailing concern is to surmount socio-cultural barriers and provide family planning programs and services to women and girls. Supporting and promoting women’s reproductive rights and encouraging family planning enhances the economic circumstances of women and families. Violence and discrimination against women continue to negatively impact the United States’ economy. Collaboration between public health and human rights can transform the social and political structures that prevent women from fulfilling their highest human potential. The theory of a complex association between health and human rights has consequences. Health professionals may contribute substantially to public recognition of the benefits and costs associated with realizing human rights and dignity. Public health may also encumber human rights: in the name of public health, gross misuse of private health status information can harm individuals and violate rights. Mann et al. 
explain that mishandling of HIV information has resulted in limitations on human rights in such areas as marriage and family, education and work, and personal freedoms (14). When vital public health problems affect groups defined by religion, national origin, or sex, prioritization may be biased and those problems assigned lower priority. Additionally, discrimination may arise when health services fail to consider economic and socio-cultural barriers to access. Human rights violations also have health consequences. The extent and scope of the health consequences of violations of rights and dignity continue to be disregarded, yet it is indisputable that such violations harm health. Recognizing the health effects of violations of rights and dignity can advance both the health and human rights fields (Mann et al. 17-19). For instance, the right to information may be violated when a woman seeks a surgical procedure without appropriate information about the procedure and its health risks being made available to her. Exploring the link between human rights and health is challenging. The most extensively established finding connects higher socioeconomic status with better health status. Lawrence Finer and Stanley Henshaw explain in the article “Disparities in Rates of Unintended Pregnancy in the United States, 1994 and 2001” that rates of unplanned pregnancy have risen among American women, most prevalently among women aged 18-24, low-income women, and minority women (91). The socioeconomic model generates escalating consequences that further increase both public health problems and human rights violations (Mann et al. 19-22).
U.S. Healthcare Systems and Women’s Reproductive Rights
Public policy plays a role in women’s reproductive rights in the United States. Most of the policy options are related to health care policies. 
Public health policies, programs, and practices can burden human rights because reproductive and gender equity and equality are not equivalent. Reproductive rights are legal rights and freedoms concerning reproduction and reproductive health. The World Health Organization defines reproductive rights as the basic right of couples and individuals to decide freely and responsibly the number and timing of their children. These rights also encompass the right to attain the highest standard of sexual and reproductive health and to education and information free of discrimination, coercion, and violence (World Health Organization, “Reproductive Health”). According to the Center for Reproductive Rights in “Report on the United States’ Compliance with Its Human Rights Obligations in the Area of Women’s Reproductive and Sexual Health,” a woman’s access to comprehensive reproductive healthcare in the United States is neither standardized nor guaranteed. The United States Constitution does not explicitly protect the right to health; consequently, healthcare is obtained through public and private sectors (par. 2). The United States is a new member of the United Nations Human Rights Council. In the near future, the Council will evaluate the United States’ adherence to its human rights responsibilities as declared in the Universal Declaration of Human Rights, the United Nations Charter, and international humanitarian law (Center for Reproductive Rights, “Report on the United States’ Compliance”). This relationship will influence United States public policy on public health issues, as it underscores the importance of the freedoms and human rights afforded to people in the United States and in other nations.
Medical Ethics and Reproductive Health Rights
There are ethical principles involved in women’s reproductive health rights. 
Essential to contemporary medical ethics is respect for patient autonomy and the basic principle of informed consent. Medical ethics deals with the choices made by both medical professionals and patients and with the responsibilities and commitments of medical professionals to their patients. Medical ethics also encompasses choices made by society, the allocation of resources and access to health care, and the problems arising from these. Four fundamental principles operate in modern medical ethics: respect for autonomy, beneficence, non-maleficence, and justice. Autonomy is respected when persons are considered moral agents with rights and responsibilities and the capacity to understand and make ethical decisions. The principle of respect for autonomy affirms the free will of all people. The principle of beneficence seeks to promote the good of the person; the principle of non-maleficence seeks to avoid causing harm; and the principle of justice considers all people comparatively equal (Harman 40; “Key Ethical Principles”). Modern medicine considers the medical professional and patient jointly engaged in the treatment decision-making process. Respect for autonomy, informed consent, and confidentiality are also important for ethical practice. In health care, respect for a patient’s autonomy is imperative. Occasionally, autonomy can clash with competing principles of ethics, such as beneficence (Pozgar 360-361). Autonomy can also be limited by a person’s capacity to make decisions for herself, as in the case of a comatose or severely brain-injured person. The principles of human dignity and respect for persons are embedded within autonomy. The principle of human dignity is the fundamental worth that resides in every human being. 
Respect for persons as a principle holds that all people should be treated as free and responsible agents, capable of making their own decisions (Cook, Dickens, and Fathalla 69-70; “Key Ethical Principles”). In health care contexts, the rights to informed consent and confidentiality help ensure that decisions are made of the patient’s own free will. The principle of informed consent gives every capable woman the rights and responsibilities to advance her own health (Cook, Dickens, and Fathalla 86; “Key Ethical Principles”). These rights impose certain corresponding obligations upon health care providers. To obtain the informed consent of the patient, healthcare providers must disclose information about proposed treatments and their alternatives, and they must respect her right to refuse treatment. In addition, healthcare providers are obligated to maintain privacy so that the patient can make decisions independent of others, including healthcare providers and family (Pozgar 278-279). Informed consent is, above all, patient-enabling: it provides the patient the information she requires to make a reasoned decision about her healthcare needs. In U.S. health care, confidentiality is regulated by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Privacy Rule, and many state laws (Miller 440-446). Confidentiality generally applies to discussions that occur between medical providers and patients in the course of treatment and/or consultation. Legally, medical providers cannot disclose patient-provider discussions. The health care provider thus has a duty to respect the patient’s trust and keep sensitive medical information confidential (Miller 447-450; Pozgar 267-268).
This requires the health care provider to respect the patient’s privacy by preventing others from accessing the patient’s private health care information, thereby producing a trusting atmosphere that supports patient candor with the health care provider. Technology and Challenges Unique to the U.S. and Developed Countries Technological advances play a role in women’s reproductive rights in the United States. Reproductive technology includes current and projected uses of technology for human reproduction, including assisted reproductive technology, such as in-vitro fertilization; contraception; and abortion. The principles of integrity and totality assert that the wellbeing of the whole person should be considered when deciding on the use of technology or therapeutic intervention (Harman 40; “Key Ethical Principles”). Assisted Reproductive Technology In the U.S., there has been an increase in assisted reproductive technology (ART). The first baby conceived through ART in the United States was born in 1982. Each year since, there has been a remarkable increase in the number of ART procedures performed, from 64,681 in 1996 to 134,260 in 2005 (Wright et al. 9). Assisted reproductive technologies refer to a number of options for helping a woman become pregnant (Cook, Dickens, and Fathalla 305). Because assisted reproductive technology procedures are costly and invasive, they are frequently employed as a last resort for conception. When employed, these medical procedures are often used alongside more conservative treatment to increase the chance of success. Assisted reproductive technology methods include in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), gamete intrafallopian transfer (GIFT), and zygote intrafallopian transfer (ZIFT) (Wright et al. 3-5). Donor egg or embryo and surrogacy are also considered forms of assisted reproductive technology (Cook, Dickens, and Fathalla 305-307).
Recently there has been an increase in assisted reproductive technologies, and in-vitro fertilization (IVF) in particular. In-vitro fertilization is the method in which the ovum is fertilized by sperm outside the womb, or in vitro. The fertilized ovum is then transferred to the woman’s uterus with the intention of producing a pregnancy. In-vitro fertilization is the principal infertility treatment when other assisted reproductive technology approaches have failed. There are examples of women’s health rights being violated in connection with in-vitro fertilization: women who are single, overweight, or of significant age past child-bearing years may be denied the same access as a married, normal-weight, younger woman. Contraception is the use of a variety of techniques to prevent pregnancy as well as to thwart sexually transmitted diseases (STDs) and human immunodeficiency virus (HIV). While the United States generally shows high levels of contraceptive use as a method of preventing pregnancy, that use is not uniformly distributed. In certain populations, mainly urban and rural communities, contraceptive options are restricted and access is difficult, resulting in an unmet need for contraceptive technology (Guttmacher Institute, “Facts on Contraceptive Use in the United States”). Despite the evolution of contraceptive technologies, method selection is individual. Contraceptive technologies are classified by the length of protection they provide: permanent, long-term, and short-term methods. Permanent methods of contraception have a very high success rate and include male (vasectomy) and female (tubal ligation) sterilization. Both procedures are invasive, increase the risks of infection and other health complications, and do not protect against HIV and STDs. Long-term methods, while not as invasive as permanent methods, also have a very high success rate.
Intrauterine devices (IUDs), oral contraceptives, and hormonal injections are forms of long-term contraceptive methods. Like permanent methods, these can increase the risk of health complications and do not protect against HIV and STDs. Short-term methods of contraception are somewhat less effective than long-term and permanent methods. Short-term contraceptive methods include condoms, spermicides, vaginal barriers, and emergency contraceptive pills. While the side effects of these methods are fewer than those previously mentioned, only the condom, when used appropriately, prevents conception and HIV and STDs simultaneously (Guttmacher Institute, “Facts on Contraceptive Use in the United States”). Access to reliable, safe contraceptives is an essential component of a woman’s reproductive health and of public health as a whole, with significant emphasis on the aspect of reproductive rights. It is imperative for healthcare providers to emphasize confidentiality and support the woman’s autonomy regarding decisions about contraceptive methods. An abortion is a pregnancy that does not result in a birth (Pozgar 309). Therapeutic and elective abortions are the most common types of abortion in the United States. Therapeutic abortions are performed when there are fetal anomalies or when pregnancy endangers the mother’s health. Elective abortions are the intentional terminations of pregnancy for reasons other than fetal abnormalities or maternal risk. These abortions to end unintended pregnancies are not uncommon (Guttmacher Institute, “Facts on Induced Abortion”). Access to reliable, legal abortion is a fundamental element of a woman’s reproductive health and an important factor in reproductive rights (Germain, “Women’s Health” 193). Women must have meaningful access to the procedure where abortion is legal. In the U.S. Supreme Court’s 1973 Roe v.
Wade decision, the constitutional entitlement to abortion was acknowledged, but the decision failed to give women real access to abortion services because of the escalating number of restrictions. Numerous state laws constrain a woman’s ability to obtain an abortion, thereby increasing the number of illegally obtained abortions. These laws are intended to make abortions more difficult to obtain. A woman’s ability to access abortion services is further threatened by public harassment of abortion providers, and restrictions on federal and private resources have produced a scarcity of services (Center for Reproductive Rights, “Report on the United States’ Compliance” par. 16-23; Guttmacher Institute, “Facts on Induced Abortion”). A resolution cannot occur without support for change. A considerable portion of the problems in women’s health is the mortality of mothers, as well as of fetuses, due in part to limited education and little or no available maternal health care. The relationship of human rights to women’s reproductive health in the United States is a significant public health issue. The overall aim of attention to women’s health and human rights is to advance the health of women and girls throughout their lifetimes. In the future, an optimal balance should be negotiated between public health goals and women’s health and human rights approaches. The extensive historical impact of women’s health and human rights emphasizes the need for the promotion and defense of health through respecting, protecting and fulfilling women’s human and health rights, which are inextricably linked. It is imperative for public health officials and lawmakers to understand the serious health consequences and implications that defiance of women’s health and human rights can have.
The creation of universal health policies and programs that promote women’s health and human rights in their design can support the rights to autonomy, participation, privacy, and information in health care. Finally, susceptibility to illness can be reduced by adopting measures to respect, protect and fulfill human rights through freedom from discrimination on the basis of race, sex, and gender roles, as well as a fundamental right to health, nutrition, and education. The focal point of women’s health issues is to remedy the inequities in research, health care services, and education that have placed women’s health in danger. By organizing women’s health research, health care services, and public policy, the new programs and ideas required to advance women’s health in the United States and internationally can grow (Gruskin, “Reproductive and Sexual Rights”). Expanding improved women’s health practices by recognizing and replicating thriving women’s health programs, advancing public health education by expanding the involvement of women and girls in health education courses, and increasing access to women’s health services by engaging professionals, such as health care professionals and public health officials, on women’s health issues will help close the disparity gap between equality and equity of health care in and across the United States, thus decreasing its public health implications.
USC researchers have developed a mathematical model to forecast metastatic breast cancer survival rates using techniques usually reserved for weather prediction, financial forecasting and surfing the Web. For decades, medical schools have taught doctors that the best way to treat cancer and metastatic progression is to memorize a list of tumors and their typical migration patterns. Metastasis is the development of malignant tumor growths elsewhere from the primary site of cancer. “This is akin to back in the days when weather reporting depended solely on a barometer and experience,” said Jorge Nieva, an associate professor of clinical medicine at the Keck School of Medicine of USC and co-author of the new study. “Medical students are taught very fundamental cancer progression patterns. What the modeling does is it brings the sort of complexity of modern-day weather forecasting to trying to understand where tumors go, when they go and how they get to that location. This type of mathematical modeling is wholeheartedly different from what most medical students learn today.” The study, published online Oct. 21 in npj Breast Cancer, a Nature Partner journal, looked at 25 years of data regarding 446 breast cancer patients at Memorial Sloan Kettering Cancer Center. It focused on a subgroup of women who were diagnosed with localized disease but later relapsed with metastatic disease. The model shows that cancer metastasis is neither random nor unpredictable. Survival depends significantly on the location of the first metastatic site or “spatiotemporal patterns.” In other words, USC researchers uncovered a framework to explain how tumor cells circulate through a patient’s bloodstream over time to settle in various organs. The path varies depending on tumor makeup and treatment decisions. 
“There’s nothing like this in the cancer world; there’s nothing really like this in the disease progression community even though the techniques are well-developed in other contexts,” said Paul Newton, lead author of the study and an aerospace and mechanical engineering professor at the USC Viterbi School of Engineering. “Our long-term goal is to build comprehensive predictive computational simulations of metastatic cancer. Ultimately, what we want to do is tailor those models to individual patients using their individual characteristics.” The framework built by USC researchers combines scattered data points that doctors are already collecting in order to produce an understandable, comprehensive cancer map. The system design is comparable to information collected by Google to predict Web-surfing patterns and to determine its PageRank values. “If somebody is reading about breast cancer on Wikipedia, the likelihood that she is going to jump to a lung cancer page or a bone cancer page is much higher than the likelihood of her jumping to the Costco website,” said Newton, who is also a professor at the USC Norris Comprehensive Cancer Center at the Keck School of Medicine, as well as professor of mathematics. “These probabilities of jumping from one page to another are not all equal. Where you jump to next depends strongly on where you currently are. This observation lies at the heart of our model.” Breast cancer patients die when tumors have colonized an average of four metastatic sites, the study found. Women had the poorest chances of long-term survival if they had more than two initial metastatic locations; they fared much better if migrating tumor cells first landed on one organ. Roughly 35 percent of breast cancer patients developed first metastasis to the bone, while less than 5 percent contracted their first metastasis in the brain, Newton said. 
The five-year survival of the bone group is more than 90 percent, whereas the brain group had survival characteristics of 20 percent or less, he said. Peter Kuhn, senior author of the study, explained further. “If you have breast cancer with metastasis to the bone and your next metastasis is the liver, you are likely to die from that,” said Kuhn, Dean’s Professor of Biological Sciences and professor of medicine, biomedical engineering, and aerospace and mechanical engineering at the USC College of Letters, Arts and Sciences. “If you have breast cancer with metastasis to the bone and the next metastasis is in the lung, you are unlikely to die from that. Instead, the disease is going to spread further first.” The study’s results led the researchers to further define the words “spreaders” and “sponges” to describe metastasis, a nomenclature that eventually could tell medical teams how best to deliver personalized therapy plans. “A spreader is a site that is likely the source of new disease,” Kuhn said. “Hence, you need to avoid spreaders or eliminate the disease if it shows up at a spreader site. At a sponge site, one might just manage or stabilize. Of course, if you could eliminate all of it, you would. But if you have multiple metastasis, one would attempt to stabilize the sponge but eliminate the spreader.” Bone, chest wall and mammary lymph nodes were spreader sites in the patients sampled. Lungs, distant lymph nodes and liver were sponge sites. The future of cancer care could include squads consisting of a biologist, a mathematician, a physicist and a computer programmer to complement current medical teams, Newton said. USC is working on a convergent science initiative that provides a collaborative environment for cancer experts. Construction of the USC Michelson Center for Convergent Bioscience broke ground in October 2014. It will eventually be the largest building on campus.
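The PageRank-style picture described above, where the probability of the next metastatic site depends strongly on the current one, can be sketched as a small Markov-chain simulation. Everything here is illustrative: the site names and the spreader/sponge intuition follow the article, but the transition probabilities are invented for demonstration and are not taken from the study.

```python
import random

# Illustrative transition probabilities between metastatic sites.
# "Spreaders" (e.g. bone, chest wall) tend to seed new sites;
# "sponges" (e.g. lung, liver) tend to absorb disease.
# These numbers are made up; each row sums to 1.0.
TRANSITIONS = {
    "primary":    {"bone": 0.35, "lung": 0.25, "liver": 0.20, "brain": 0.05, "chest_wall": 0.15},
    "bone":       {"lung": 0.40, "liver": 0.30, "brain": 0.10, "chest_wall": 0.20},
    "lung":       {"lung": 0.50, "liver": 0.25, "brain": 0.15, "bone": 0.10},
    "liver":      {"liver": 0.60, "lung": 0.20, "brain": 0.10, "bone": 0.10},
    "brain":      {"brain": 0.70, "lung": 0.15, "liver": 0.15},
    "chest_wall": {"bone": 0.40, "lung": 0.35, "liver": 0.25},
}

def next_site(current, rng):
    """Sample the next metastatic site given the current one."""
    sites = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][s] for s in sites]
    return rng.choices(sites, weights=weights, k=1)[0]

def simulate_path(n_steps=4, rng=None):
    """Simulate one patient's sequence of metastatic sites.

    The article reports that patients die after colonizing an average
    of four sites, so four steps is used as a rough horizon.
    """
    rng = rng or random
    path = ["primary"]
    for _ in range(n_steps):
        path.append(next_site(path[-1], rng))
    return path

if __name__ == "__main__":
    rng = random.Random(42)
    # Estimate how often bone is the *first* metastatic site.
    first_sites = [simulate_path(rng=rng)[1] for _ in range(10_000)]
    frac_bone = first_sites.count("bone") / len(first_sites)
    print(f"Fraction with first metastasis to bone: {frac_bone:.2f}")
```

Running many such paths recovers the chain's first-passage statistics, which is the kind of quantity the researchers relate to survival; a real model would fit the transition matrix to patient data rather than assume it.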
“Over the next five to 10 years, there’s going to be a big change in the way medical schools and oncologists think about disease,” Newton said. “I could easily see a situation 10 years down the road where a patient comes in with a particularly difficult disease. The oncologists in charge will put together a team of researchers to develop a model to forecast disease progression and determine best treatment options that they would then implement.” Memorial Sloan Kettering Cancer Center contributed their clinical expertise and supplied the data set used in this study. The study was funded by a National Institutes of Health/National Cancer Institute Physical Sciences-Oncology Centers Transnetwork Grant. — Zen Vuong
Donal Cam O’Sullivan Beare and his clan begin their epic march to Ulster on December 31, 1602. O’Sullivan has supported Hugh O’Neill, Earl of Tyrone, in his fight against Elizabethan England‘s attempts to destroy Gaelic Ireland once and for all. The cause O’Neill and O’Sullivan fight for is probably doomed after O’Neill’s defeat in the Battle of Kinsale in 1601, but the fight goes on, nonetheless. O’Sullivan Beare conceals 300 of the women, children and aged of his community in a stronghold on Dursey Island, but this position is attacked, and the defenders hanged. In what is later termed the Dursey Massacre, Philip O’Sullivan Beare, nephew of O’Sullivan Beare, writes that the women and children of the Dursey stronghold are massacred by the English, who tie them back-to-back, throw them from the cliffs, and shoot at them with muskets. After the fall of Dursey and Dunboy, O’Sullivan Beare, Lord of Beara and Bantry, gathers his remaining followers and sets off northwards on December 31, 1602 on a 500-kilometre march with 1,000 of his remaining people. He hopes to meet Hugh O’Neill on the shores of Lough Neagh. O’Sullivan Beare fights a long rearguard action northwards through Ireland, through Munster, Connacht and Ulster, during which the much larger English force and their Irish allies fight him all the way. The march is marked by the suffering of the fleeing and starving O’Sullivans as they seek food from an already decimated Irish countryside in winter. They face equally desperate people in this, often resulting in hostility, such as from the Mac Egans at Redwood Castle in County Tipperary and at Donohill in O’Dwyer’s country, where they raid the Earl of Ormonde‘s foodstore. O’Sullivan Beare marches through Aughrim, where he raids villages for food and meets local resistance. He is barred entrance to Glinsk Castle and leads his refugees further north. 
On their arrival at Brian Oge O’Rourke‘s castle in Leitrim on January 4, 1603, after a fortnight’s hard marching and fighting, only 35 of the original 1,000 remain. Many had died in battles or from exposure and hunger, and others had taken shelter or fled along the route. O’Sullivan Beare had marched over 500 kilometres, crossed the River Shannon in the dark of a midwinter night, having taken just two days to make a boat of skin and hazel rods to carry 28 at a time the half-kilometre across the river, fought battles and constant skirmishes, and lost almost all of his people during the hardships of the journey. In Leitrim, O’Sullivan Beare seeks to join with other northern chiefs to fight the English, and organises a force to this end, but resistance ends when Hugh O’Neill, 2nd Earl of Tyrone signs the Treaty of Mellifont. O’Sullivan Beare, like other members of the Gaelic nobility of Ireland, flees into exile, making his escape to Spain by ship. O’Sullivan Beare settles in Spain and continues to plead with the Spanish government to send another invasion force to Ireland. King Philip III gives him a knighthood, pension, and the title Earl of Bearhaven, but never that which he desires most, another chance to free his homeland. Many generations of O’Sullivan Beare’s family later achieve prominence in Spain. In 1618, Donal Cam O’Sullivan Beare is killed in Madrid by John Bathe, an Anglo-Irishman, but the legend of “O’Sullivan’s March” lives on. The Beara-Breifne Way long-distance walking trail follows closely the line of the historical march.
Learning More about Animal Agriculture: A critical component of the ag industry From Kentucky Farm Bureau In Kentucky, animal agriculture represents well over half of the ag economy when taking into account all livestock sectors. Much of this can be attributed to tradition, but because of the investments made across the industry, much of which comes from the Kentucky Agricultural Development Fund, livestock production has grown while animal quality remains important and is exceptional in many cases. What helps solidify this excellence is the attention given by producers who know and understand the importance of making animal care paramount in their operations. Drs. Flint and Patricia Harrelson also understand this practice and live it every day by teaching animal science at Morehead State University. The husband and wife team are assistant professors at MSU and lead students not only by way of books but through hands-on applications at the university’s nearby farm facility, the Derrickson Agricultural Complex. One of the first things the pair talks about with their students is career related: finding out what those new students want to do. “One of the great things about animal science but also scary, is there’s a wide spectrum of what they can do with their degree. Explaining the different avenues that they can go down and making sure they make those connections is one of the critical parts that we try to establish, initially,” said Patricia. One confusing thing about being animal scientists is that many people, including some students, think the Harrelsons are veterinarians. “Making students aware that there are so many different things they can do besides being a vet or working in a veterinarian’s office or even raising livestock. It is a broad scoping major and there are a lot of opportunities,” said Flint.
While many of their students come from a farming background or have participated in organizations such as FFA or 4-H, many still don’t realize all of the career opportunities available through an animal science degree. Discovering these career avenues is but one of many steps involved in teaching these young people about animal agriculture and its importance in the overall ag industry. Even students with backgrounds in livestock production, for instance, find out so much more about the animals including improvement of genetics and understanding principles that make them more sustainable financially and within the environment. “When you think about all of agriculture, you can’t have one area without the others,” said Patricia. “I teach animal anatomy and physiology from time to time and you can think of the industry as we would of body systems; maybe crops are the circulatory system and you have the animal side that is more of the digestive system. Without one or the other, you’re not going to have a fully functional animal.” With the entire livestock sector being so valuable to the state ag industry, animal agriculture is playing a bigger and more significant role. She pointed out that the different ag sectors depend on each other to make the industry as a whole work. “I don’t think you can have agriculture without animals and I don’t think you can replace one area or take it out,” said Flint. “The system really becomes a full circle even utilizing fecal material as fertilizer to reduce use of chemicals.” Educating more than just students Ultimately most of what is produced agriculturally will go to consumers, one way or another and often that is where many of the misconceptions related to animal agriculture originate or at least accumulate.
“One of the big things we do is making sure the students know the difference between animal rights and animal welfare because those are not the same thing,” said Patricia. “Animal welfare is regarding the care and husbandry of the animal and making sure humane practices are being followed. Animal rights is where we see a lot of the advocacy groups that want to do away with animal agriculture and they try to give human feelings to animals. They think we shouldn’t be raising animals for food.” She added that those differences have to be pointed out and emphasized that all those producing livestock and raising animals want to treat them with care. “The animals will not be productive unless we take care of them. That means nutrition, bedding, shelter and keeping them healthy,” said Patricia. “A farmer is not going to raise livestock and not take care of them because they won’t be profitable. But more than that, farmers are passionate about what they do and I can’t comprehend why people would think they would mistreat an animal." Something else that adds to the misconceptions comes by way of marketing and labeling of foods in the marketplace. “There is the mislabeling and the marketing aspects verses the truth. Hormone free for instance; there’s no way an animal is hormone free because all animals have hormones just like humans do,” said Flint. “Now, added hormones, we can have a discussion there. And in the event an antibiotic has to be used, that animal can’t be put into food production until a withdrawal period has been realized. That’s the law.” The Harrelsons said fighting these misconceptions is often tiring but noted that passing on correct information and the proper care of animals to their students will in turn see those students pass on the correct information, as well. “It will hopefully create a chain reaction and we’re hoping to reach more and more people through the students who will spread this information,” said Patricia. 
In today’s environment, advocacy becomes as much a part of animal agriculture and raising and caring for them along with discovering the right career choice. Both Flint and Patricia see themselves as advocates for agriculture and find themselves more and more having to teach their students about being good advocates. But at the end of the day, the two care about the animals they are teaching their students about and they are passing that along in hopes of more and more people getting a better understanding about the animal side of agriculture. A real farm experience MSU Farm Manager Joe Fraley oversees the operations at both of the school’s farming facilities; the Derrickson Agricultural Complex and Browning Orchard, which is near Flemingsburg. He is involved on a daily basis with the students who are required to work on the farm a few hours each week as part of their degree program. “This is a working farm which allows students to not only get the science side of agriculture but they also see the hands-on side that makes them more employable when they graduate,” he said. “For some of the students, this is new to them but they are excited to learn new things.” Fraley added that by being there on the farm, these students are getting a better idea of what animal agriculture is all about working with the Angus cattle, cross bred hogs, cross-bred sheep and horses which are located at the facility. “It’s extremely important to educate our young people on how to handle these animals in a professional and appropriate manner,” he said. “One of the goals we have with students, especially with those who have not been around farm animals, is to make sure they don’t have any misconceptions about how livestock is raised and to raise the best product we can whether its plant or animal and taking care of it correctly is the most important part.”
For Immediate Release, March 8, 2016 Legal Petition Seeks Crackdown on Aquarium Fish Caught With Cyanide Poison OAKLAND, Calif.— Conservation groups filed a legal petition today to prevent the import of tropical aquarium fish that are caught overseas using cyanide, a practice that kills or injures tens of millions of tropical fish and causes widespread destruction of some of the world’s most important coral reefs. Each year as much as 90 percent of the 12.5 million tropical fish entering the United States as pets are caught illegally with cyanide. “The sad reality is that cyanide poisoning is causing widespread destruction of some of the world’s most stunning coral reefs. By acting on our petition, the Obama administration can put a huge dent in this destructive practice,” said Nicholas Whipps, a legal fellow at the Center for Biological Diversity. “We can’t allow our love of these fish to lead to the wholesale destruction of coral reefs.” Wild reef fish are caught in the Philippines, Indonesia and other countries by squirting cyanide directly onto reefs to stun tropical fish, which kills as much as 75 percent of all nearby fish on contact, as well as nearby corals. The fish that survive are then shipped to the United States and sold as aquarium fish. Today’s petition asks the National Marine Fisheries Service, U.S. Customs and Border Protection and U.S. Fish and Wildlife Service to use their authority under the Lacey Act to halt these illegal imports. “Millions of animals suffer and die each year through the careless acts of aquarium fishers removing wild fish from the oceans,” said Teresa M. Telecky, director of wildlife at Humane Society International. “The U.S. government must act now to put a stop to this cruel and illegal practice by requiring certification that imported live fish were not caught with cyanide.” Under the Lacey Act, it is illegal to import animals caught in violation of another country’s laws.
The largest reef-fish-exporting countries — the Philippines, Indonesia and Sri Lanka — have banned cyanide fishing but do little to regulate the practice; the Lacey Act prohibits the import of these illegally caught fish into the United States, but enforcement is lacking. As many as 500 metric tons of cyanide are dumped annually on reefs in the Philippines alone. The petition by the Center for Biological Diversity, For the Fishes, the Humane Society of the United States and Humane Society International requests that imports of tropical aquarium fish be tested for cyanide exposure in order to enter or be sold in the United States. “Coral reefs now face unprecedented stress and die-offs from climate change. Those exposed to cyanide poisoning and other unsustainable practices may never recover,” said Rene Umberger, executive director of For the Fishes. “Saltwater aquarium hobbyists concerned about their impacts should choose from the dozens of captive-bred species now available and steer clear of all fish captured in the wild until federal enforcement is in place.” The Center for Biological Diversity is a national, nonprofit conservation organization with more than 990,000 members and online activists dedicated to the protection of endangered species and wild places. www.biologicaldiversity.org. Get For the Fishes’ award-winning mobile app, Tank Watch, and learn which saltwater aquarium fish species may be captive-bred and which are captured in the wild. Humane Society International and its partner organizations together constitute one of the world’s largest animal protection organizations. For more than 20 years, HSI has been working for the protection of all animals through the use of science, advocacy, education and hands on programs. Celebrating animals and confronting cruelty worldwide — on the Web at hsi.org. The Humane Society of the United States is the nation’s largest animal protection organization, rated most effective by our peers. 
For 60 years, we have celebrated the protection of all animals and confronted all forms of cruelty. We are the nation’s largest provider of hands-on services for animals, caring for more than 100,000 animals each year, and we prevent cruelty to millions more through our advocacy campaigns. Read more about our 60 years of transformational change for animals, and visit us online at humanesociety.org.
Systems administrator. A systems administrator is the person responsible for establishing and maintaining a computer system and/or computer network: the professional who executes, maintains, operates, and ensures the correct functioning of it. The systems administrator has usually completed a study program that includes areas of knowledge in software engineering, as well as network management and telecommunications. Systems administrators are usually members of the information technology, electronics, or telecommunications engineering department. A systems administrator can also be an IT security administrator or a Linux/Unix/Windows manager. System administrators play a vital role in the IT industry. Below is a typical job description for a system administrator in major countries of the world.
- Maintain the development environment, for both hardware and software.
- Make backup copies.
- Assemble a repository of the tools used in the project.
- Manage user accounts (account installation and maintenance).
- Check that the peripherals are working properly.
- In case of hardware failure, schedule repair times.
- Monitor system performance.
- Create file systems.
- Install software.
- Create the backup and recovery policy.
- Monitor network communication.
- Update the systems as new versions of operating systems and application software become available.
- Apply the policies for the use of the computer system and network.
- Configure security policies for users. A systems administrator must have a solid understanding of computer security (for example, firewalls and intrusion detection systems).
- Knowledge of installation, configuration and administration of operating systems.
- Knowledge of computer hardware and networks.
- Master the architecture of the system to carry out the deployment.
- Determine support mechanisms to distribute the product to end users.
Systems administrators must be able to adapt to the complex and changing situations of today's business world, and to respond to the demands their organizations pose in their field, thereby contributing to the growth and development of the society in which those organizations operate. They also need the ability to discern, analyze and evaluate the repercussions that the philosophies, foundations, theories, concepts, techniques and procedures of information systems administration have on the behavior of individuals and organizations.
- Development of research in the field of systems.
- Artifacts and controls
- Product transition plan.
- Architecture deployment view (architecture document).
- Incident report.
- Wide area network design and management.
- Database administrator.
- Systems analyst.
- Security administrator.
- Systems auditor.
- Information systems programmer and designer.
- Project manager.
Roles and responsibilities of a system administrator
- Automate repetitive work rather than wasting time on mundane tasks.
- Make periodic and complete backups.
- Prefer a small number of large partitions.
- Do not covet another system that is not necessary.
- Do not procrastinate.
- Document and automate your tasks.
- Do not reboot a machine if you don't know what will happen next.
- Respect the resources that the operating system gives you.
- Document comprehensive and effective action policies.
- Keep logs of everything that happens on your servers.
According to Indeed, the top job site in the US, the average salary of a system administrator with 2–3 years of experience is $85,083; a bachelor's degree in computer systems is also generally expected.
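Several of the duties above ("make backup copies," "create the backup and recovery policy," "document and automate your tasks") come down to scripting. As a minimal sketch only, a timestamped-backup routine with retention pruning might look like the following; the paths, archive naming, and 14-day retention are illustrative assumptions, not a prescribed policy:

```python
import os
import tarfile
import time

def backup(src, dest, keep_days=14):
    """Archive `src` into a timestamped tar.gz under `dest`, pruning old copies.

    The naming scheme and 14-day retention window are illustrative assumptions.
    """
    os.makedirs(dest, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = os.path.join(dest, f"data-{stamp}.tar.gz")
    # Write the whole source tree into a compressed archive.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=os.path.basename(src))
    # Prune archives older than the retention window.
    cutoff = time.time() - keep_days * 86400
    for name in os.listdir(dest):
        path = os.path.join(dest, name)
        if name.startswith("data-") and name.endswith(".tar.gz") \
                and os.path.getmtime(path) < cutoff:
            os.remove(path)
    return archive
```

In practice such a script would be documented alongside the recovery procedure and scheduled (for example, via cron) rather than run by hand, in line with the "document and automate" principle above.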
- Programming operating systems and system maintenance techniques
- Networking for system administrators
- Securing networks and operating systems
- Server administration for Microsoft Windows
- System formulas and data modeling
- Shell, Java, Perl and C++ languages
The Army's Achilles' Heel in the Civil War Plains Campaigns of 1864-65 On August 18, 1864, after hastily re-mustering at Omaha from their veteran furloughs, the men of the First Nebraska Volunteer Cavalry left for Fort Kearny. Instead of returning to Arkansas where it had spent the first half of 1864, the regiment's new mission was to help defend the Platte Valley freighting, stagecoach, and telegraph route from an onslaught of Indian raids that had recently broken out. Having been issued only sixty horses for three hundred men, the mostly dismounted cavalrymen probably appreciated the irony of being sent off on foot to chase down an elusive foe known for its horsemanship. A month later the First Nebraska's Lt. Col. William Baumer notified District of Nebraska headquarters in Omaha that five companies of the regiment at Fort Kearny and at Plum Creek Station, thirty-five miles to the west, were still without horses. Over the next several months, horses were gradually issued, but never enough to mount all the men. What's more, many of the horses the regiment did receive were mediocre at best, poorly fed, and could not perform the duty expected of them, a problem that persisted. On May 19, 1865, First Nebraska Col. Robert R. Livingston told District of the Plains commander Patrick Connor what Connor already knew: "Our horses cannot run an Indian down, too poor." The First Nebraska's plight was a common experience for the volunteer cavalry on the Plains during the Civil War. The rebellion placed enormous demands on the country's equine resources at a time when animals also furnished the principal motive power in the civilian world. Armies in the field equipped with artillery, cavalry, and supply trains required one horse or mule, on average, for every two men. 
Some 284,000 horses were consumed by the Union cavalry alone during the first two years of the war and Army Chief of Staff Henry Halleck rated the Union's 1864 expenditure of cavalry horses at slightly fewer than 180,000 animals, an average of about five hundred per day. Between January 1864 and February 1865 the Army of the Potomac's cavalry arm had twice been remounted. From January 1, 1864, until purchases ceased on May 9, 1865, the quartermaster general's department bought approximately 193,000 cavalry horses. Only a relative handful of these made their way to the Plains. Although Secretary of War Edwin M. Stanton claimed in his postwar report, "The supply of horses and mules for the army has been regular and sufficient," apparently the secretary had not paid attention to letters coming from commanders in the West. In late February 1864 Department of Kansas commander Samuel R. Curtis wrote Stanton from Fort Leavenworth recommending the purchase of Indian ponies for government use because "better horses are now becoming very scarce." Regulations provided that the ideal cavalry horse was from 15 to 16 hands high at the withers (5 feet to 5 feet, 4 inches), five to nine years old, weighing from 750 to 1100 pounds, and "sound in all particulars . . . in full flesh and good condition." As the war with its tremendous consumption of horseflesh dragged on, the "ideal" cavalry horse became little more than an abstraction. Union cavalryman Charles Francis Adams, Jr. described how the service ruined horses. Even a walking pace of four miles an hour was "killing to horses" carrying the average load of 225 pounds comprising the soldier and his equipment. During active campaigns, said Adams, the horse remained saddled an average of fifteen hours per day. "His feed is nominally ten pounds of grain a day and, in reality, he averages about eight pounds. He has no hay and only such other feed as he can pick up during halts. 
The usual water he drinks is brook water, so muddy by the passage of the column as to be of the color of chocolate. Of course sore backs are our greatest trouble." Nonetheless, the horse "still has to be ridden until he lays down in sheer suffering under the saddle." Adams was describing conditions in northern Virginia in 1863, not those facing the cavalry on the distant Plains, with even fewer resources to call upon. Not only were there too few horses to mount the Plains cavalry in 1864 and 1865, they broke down quickly from overwork and a shortage of grain. The full forage ration for an army horse was 14 pounds of hay and 12 pounds of grain daily, which Adams noted was not regularly provided even in the war's eastern theater. And unlike the Indian pony that ethnologist John Ewers described as "a tough, sturdy, and long-winded beast that possessed great powers of endurance" and which was acclimated to the Plains environment, American horses could not maintain their stamina by grazing alone. This was no secret to military men with Plains experience. Capt. Randolph B. Marcy's 1859 guidebook, The Prairie Traveler, advised "for prairie service, horses which have been raised exclusively upon grass and never been fed on grain . . . are decidedly the best and will perform more hard labor than those that have been stabled and groomed." Overland emigrants also noted the contrast. John M. Shively, who went to Oregon in 1843, authored a guidebook that admonished emigrants to "Swap your horses for Indian horses and be not too particular, for the shabbiest Shawnee pony . . . will answer your purpose better than the finest horse you can take from the stables." The entire essay written by James E. Potter appears in the Winter 2011 issue.
What's Wrong with Timeouts? Parenting "experts" these days are united in their opposition to physical punishment, which research repeatedly shows hinders kids' moral, emotional and even intellectual development. (If you have questions about this, please see this article on spanking.) But of course, that leaves the very real question of how parents can guide a two, three or four year old, who doesn't have enough development in the prefrontal cortex yet for reason to trump emotion, and who may have no interest in following our rules! Most experts advise parents to use Timeouts. On the surface, Timeouts seem sensible. They're non-violent but still get the child's attention. Plus, they give the parent and child a much-needed break from each other while emotions run high. But any child can explain to you that timeouts ARE punishment, no different from being made to stand in the corner as a child. And any time you punish a child, you make him feel worse about himself and you erode the parent-child relationship. So, not surprisingly, research shows that timeouts don't necessarily improve behavior. A study done by the National Institute of Mental Health[i] concluded that timeouts are effective in getting toddlers to cooperate, but only temporarily. The children misbehaved more than children who weren't disciplined with timeouts, even when their mothers took the time to talk with them afterward. Michael Chapman and Carolyn Zahn-Waxler, the authors of the study, concluded that the children were reacting to the perceived "love withdrawal" by misbehaving more. That's in keeping with the studies on love withdrawal as a punishment technique, which show that kids subjected to it tend to exhibit more misbehavior, worse emotional health, and less developed morality [ii]. These results aren't surprising, given how much children need to feel connected to us to feel safe, and how likely they are to act out when they don't feel safe.
Alfie Kohn, in his book Unconditional Parenting, cites numerous studies on the negative effects of timeout and other love-withdrawal techniques on children's moral and psychological development. [i] Chapman, Michael and Zahn-Waxler, Carolyn. "Young Children's Compliance and Noncompliance to Parental Discipline in a Natural Setting." International Journal of Behavioral Development 5 (1982): p. 90. [ii] Hoffman, Martin. (1970) "Moral Development." In Carmichael's Manual of Child Psychology, 3rd ed., volume 2, edited by Paul H. Mussen. New York: Wiley. So while it's true that timeouts are infinitely better than hitting, they teach the wrong lessons, and they don't work to create better behaved children. In fact, they tend to worsen kids' behavior. Here's why. 1. Timeouts make kids see themselves as bad people. You confirm what she suspected – she is a bad person. Not only does this lower self-esteem, it creates bad behavior, because people who feel bad about themselves behave badly. As Otto Weininger, Ph.D., author of Time-In Parenting, says: "Sending children away to get control of their anger perpetuates the feeling of 'badness' inside them...Chances are they were already feeling not very good about themselves before the outburst and the isolation just serves to confirm in their own minds that they were right." 2. Timeouts don't help kids learn emotional regulation. The fastest way to teach kids to calm themselves is to provide a "holding environment" for the child, giving him the message that his out of control feelings are acceptable and can be regulated. When you send him off to his room by himself, he'll calm down eventually -- but he's no closer to learning to manage those emotions next time. That doesn't mean you need to physically hold your child when he's upset; he probably won't let you. A "holding environment" might also mean staying close and calm, saying very little, but reassuring him that he's safe and you're there with a hug when he's ready. (Why "safe"?
Because emotional dysregulation sends the child into "fight, flight or freeze" which means by definition that an upset child feels unsafe. That's why he fights you as if you're his mortal enemy instead of his beloved parent. So your goal when your child is upset is to restore safety, before you can teach appropriate behavior.) 3. Timeouts work through fear, as a symbolic abandonment. Banishing an upset child is pushing her away just when she needs you the most. Worst of all, she only calms down and becomes more "obedient" because you've triggered the universal childhood fear of abandonment. Dan Siegel says that the relational pain of isolation in timeout is deeply wounding to young children and that when repeated over and over, the experience of timeout can “actually change the physical structure of the brain.” 4. Timeouts don't help kids with their upsetting emotions, which makes more misbehavior likely. Isolating the child with timeout gives her the message that you'll push her away if she expresses challenging emotions. Only her “pleasant” feelings are safe; her authentic, messy, difficult feelings – part of who we all are – are unacceptable and unlovable. A child can't separate herself from her feelings. So she concludes that she's unlovable. And she represses those difficult emotions, which just means they're no longer under conscious control and are ready to pop out with more force next time she gets upset. 5. Instead of reaffirming your relationship with your child so she WANTS to please you, timeouts fuel power struggles. Many parents end up in physical brawls with their child while trying to drag them to timeout. The child loses face and has plenty of time to sit around fantasizing revenge. (Did you really think she was resolving to be a better person?) 6. Timeouts, like all punishment, keep us from partnering with our child to find solutions since we're making the problem all theirs. That makes us less likely to see things from our child's perspective. 
It weakens our bond with our child. Unfortunately, that bond is the only reason children behave to begin with. So parents who use timeouts often find themselves in a cycle of escalating misbehavior. What to do instead of Timeouts In summary, timeouts, while infinitely better than hitting, are just another version of punishment by banishment and humiliation. To the degree that Timeouts are seen as punishment by kids – and they always are -- they are not as effective as positive guidance to encourage good behavior. So if you’re using them as punishment for transgressions, that’s a signal that you need to come up with a more effective strategy. Prevention always works best, and emotion coaching is invaluable. Managing your own emotions is also essential, because that calms, rather than inflames, the storm. See How to Use Peaceful Parenting, and Handling Your Own Anger for specific strategies. And if you’re using Timeouts to deal with your child's meltdowns, that’s actually destructive, because you’re triggering your child’s abandonment panic. Try emotion coaching and time ins. What's Time IN? If you want to teach your child emotional self-management, that’s only effective before a meltdown starts and the child can still access the reasoning capacity of the prefrontal cortex. When you see the warning signs, take your child to a "Time IN" to help her calm down. This signals to your child that you understand that she's got some big emotions going on and you're right there with her. If she's just a bit wound-up and wants to snuggle or even read a book, fine. If she's ready for a melt-down, you're there to help. Just let her know you're there and she's safe. Once the meltdown starts and your child is swept with emotion, it’s too late for teaching. Don't try to talk or negotiate or convince him of anything; he's in "fight or flight" emergency mode and the thinking parts of his brain aren't working right now. 
Just stay nearby so you don't trigger his abandonment panic, and stay calm. Don't give in to whatever caused the meltdown (in other words, don't give him that cookie you said no to), but offer your total loving attention. Tell him he's safe. Be ready to reassure him of your love once he calms down. When You're Losing It Timeouts are a terrific management technique for keeping your own emotions regulated. But use them on yourself, not your child. When you find yourself losing it, take five. This keeps you from doing anything you'll be sorry about later. It models wonderful self-management for your kids. And ultimately it makes your discipline more effective because you aren't making threats that you won't carry out. Parents who use timeouts are often shocked to learn that there are families who never hit, never use timeouts, and rarely raise their voices to their children. But you shouldn't need to use these methods of discipline, and if you're using them now, you'll probably be quite relieved to hear that you can wean yourself away from them. Check out the section on this website called How to Use Peaceful Parenting for more specifics. And remember, this too shall pass!
What is the period of the Earth's rotation around its axis? As a rule, the answer to this question is that one turn around the axis of the Earth requires approximately 24 hours, and that this period is called the mean solar day; or, more precisely, the stellar day, some four minutes shorter. In fact, neither the solar nor the stellar day stands for the Earth's full spin period. Earth turns around its axis considerably more slowly. The only frame of reference in which its intrinsic rotation is clearly defined, while its orbital motions are excluded, is the one related to the Moon. The full spin period of our planet is the lunar day, about 50 minutes longer – the time from one lunar zenith to the next. The term tidal day is also in use in oceanography.

Earth–Moon binary system

Consider two celestial bodies forming a gravitational binary system. In astronomy, a commonly accepted criterion dividing a planet-satellite pair from a double-planet system is based on the location of the barycenter of the two objects. If their center of mass is not situated under the surface of either body, then one may refer to the pair as a double-planet system. In accordance with that criterion, since the Earth–Moon barycenter is 4,671 km from the Earth's center, the Moon represents a satellite of the Earth. The late Isaac Asimov proposed a distinction between planet-moon and double-planet systems founded on what he called a "tug-of-war" value: if the interaction between the star and the smaller celestial body is greater than the interaction between the two bodies, then a double planet is in place; otherwise, the smaller body represents a satellite of the planet. In the case of the Earth's Moon, the Sun actually wins the tug of war, since its gravitational effect on the Moon is more than twice that of Earth's, so Asimov reasoned that the Earth and the Moon must form a double-planet system.
That is the reason why the resulting Earth's orbit and the corresponding Moon's orbit are everywhere concave toward the Sun. Avoiding these semantic classifications, here we refer only to the incontestable fact that Earth and the Moon turn around their common center of mass, i.e., that they form a gravitational binary system (within the Sun–Earth–Moon three-body system, of course). Two celestial bodies with masses m1 and m2 form a gravitational binary system if the kinetic energies of the relative motions of either body with respect to the common center of mass are always numerically smaller than the potential of their gravitational interaction:

T1 + T2 < G m1 m2 / D. (1)

In this expression, G represents the gravitational constant and D the distance between the two bodies. If condition (1) is satisfied, the orbits of the bodies are two similar ellipses. Their lines of apsides coincide and the common center of mass C lies at their inversely disposed foci (Fig. 1).

Fig. 1 – Motions of the bodies forming a binary system

Let us now consider Earth and the Moon. We introduce two systems of coordinates: xCy (x coincides with the line of apsides) and a second, lunar frame whose first axis coincides with the Earth–Moon direction. The second one represents the lunar frame of reference. The average velocity of the Moon with respect to Earth is 1.022 km/s. We shall compare this quantity with the capture velocity of the Moon obtained from the maximum (numerical minimum) of the potential. Substituting the maximum Earth–Moon distance into expression (1), one obtains the capture velocity of the Moon, about 1.40 km/s. This result is in accordance with the fact that Earth and the Moon form a binary system. Provisionally accepting that the barycenter C is the origin of an inertial reference frame xCy (Fig. 1), we can write down the following expressions in this frame, where rE, rM are the position vectors and vE, vM and aE, aM the velocities and accelerations of Earth and the Moon relative to C:

mE rE + mM rM = 0, (2)
mE vE + mM vM = 0, (3)
mE aE + mM aM = 0. (4)

The absolute values of all the kinematic characteristics are inversely proportional to the mass ratio. Because of that, the periods of the orbital rotations have to be equal.
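The capture-velocity figure quoted above can be checked numerically. The short Python sketch below assumes one plausible reading of that quantity: the two-body escape speed from Earth evaluated at the maximum Earth–Moon distance, v = sqrt(2GM/D); with standard values this reproduces the quoted 1.40 km/s and confirms the relative velocity stays below the bound:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg (standard value)
D_MAX = 4.0672e8     # maximum Earth-Moon distance (apogee), m

# Escape speed from Earth at apogee: one plausible reading of the
# "capture velocity" obtained from the maximum of the potential.
v_capture = math.sqrt(2 * G * M_EARTH / D_MAX)

# The Moon's mean velocity relative to Earth (from the text) stays
# below this bound, so the pair satisfies the binary-system condition.
v_moon_relative = 1.022e3  # m/s

print(f"capture velocity ~ {v_capture / 1e3:.2f} km/s")
print(f"bound system: {v_moon_relative < v_capture}")
```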
So, these two bodies move in the ideal resonance 1:1.

The Moon's and Earth's orbits

Parameters of the elliptical orbits: a – semi-major axis, c – half the distance between the foci, e = c/a – eccentricity, and b = a√(1 − e²) – the semi-minor axis. Since max D = 406,720 km and the Moon's orbit eccentricity is e = 0.0549, the Moon's orbit about the barycenter has aM = 380,868 km, bM = 380,249 km, cM = 20,910 km, while the Earth's has aE = 4,685 km, bE = 4,678 km, cE = 257 km.

Fig. 2 – Earth–Moon binary system

The relative velocity of the Moon with respect to Earth is 1.022 km/s. From eq. (3) it follows that the "absolute" velocities of the Moon and the Earth in their small orbits are about 1.010 km/s and 0.0124 km/s, respectively. Taking 379,730 km as the average distance from the Moon to the mass center C, we can evaluate the circumference of its orbit, ≈ 2,385,916 km. This heavenly body completes one turn along this path in 2.363229 × 10^6 s = 27.35 mean solar days. The same time is required for Earth to cover the circumference of its own small orbit. During one mean solar day, Earth and the Moon turn ≈ 0.23 rad = 13.16° around the mass center C (Fig. 2).

The spin periods

The answer to the question of what the period of the Earth's rotation around its axis is will be, as a rule, either that it is the 24-hour mean solar day – the rotational period relative to the Sun – or the stellar, or sidereal, day, some four minutes shorter (23 hours, 56 minutes and 4.09 seconds) – the period relative to the fixed stars. Let us quote some sources in which the stellar day is represented as the period of the Earth's rotation.

Period of Rotation of the Earth
- Moche, Dinah. Astronomy. 4th ed. Wiley, 1993: 24. "23 hours, 56 minutes, 4 seconds long" – 86,164 s.
- "Earth." Microsoft Encarta 97 Encyclopedia. Microsoft, 1996. "… the earth rotates on its axis every 23 hours, 56 minutes and 4.1 seconds based on the solar year." – 86,164.1 s.
- Daily, Robert. Earth. USA, 1994: 20. "… it takes earth 23 hours, 56 minutes and 4.09 seconds, a period of time we call a day." – 86,164.09 s.
- "Earth." Encyclopedia Britannica.
Chicago: Encyclopedia Britannica, 1998: 320. "… the earth spins on its axis and rotates completely once every 23 hours, 56 minutes and 4 seconds." – 86,164 s.
- Earth's Rotation. Liftoff to Space Exploration. NASA, Marshall Space Flight Center. "The actual value is 23 hours, 56 minutes and 4 seconds." – 86,164 s.

As seen, together with the noun "day" usually goes a qualifier defining the frame of reference under consideration. "Sun" represents a relative and "fixed stars" the absolute reference frame. Our planet performs at least five different rotations: spin, precession together with its axis, orbital motion around the Earth–Moon barycenter, turning together with this barycenter around the Sun and, finally, rotation with the solar system around the center of the galaxy. The planes of all these rotations are completely different. The solar day represents the period of the first three, and the stellar day that of all these finite rotations united in one. That is the reason why the stellar day is shorter than the solar one. In fact, neither of these days corresponds to the Earth's full spin period. Does anyone add the angular velocity arising from the curvature of the path to the number of revolutions of a motor car's engine? Or the corresponding angular velocity to the spin of a gyrocompass or gyrostabilizer? As is known, our planet represents a giant gyroscope and, by all means, one has to be aware of its spin pace. Generally, study of the planet's motion is only possible if all the rotations are separated. Since Earth and the Moon move in 1:1 resonance along their orbits around their common mass center, their intrinsic rotations are clearly defined, and their orbital motions excluded, only in the lunar frame of reference (Figs. 1 and 2).
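The spin period in that lunar frame is the synodic combination of the sidereal day and the sidereal month: 1/T_lunar = 1/T_sidereal_day − 1/T_sidereal_month. A short Python check, using standard textbook values for the two sidereal periods, recovers the familiar length of the lunar (tidal) day:

```python
# Synodic combination: 1/T_lunar = 1/T_sidereal_day - 1/T_sidereal_month.
T_SIDEREAL_DAY = 86164.1            # s (23 h 56 min 4.1 s)
T_SIDEREAL_MONTH = 27.3217 * 86400  # s (27.3217 mean solar days)

t_lunar = 1.0 / (1.0 / T_SIDEREAL_DAY - 1.0 / T_SIDEREAL_MONTH)

# Break the result into hours, minutes and seconds for comparison.
hours, rem = divmod(t_lunar, 3600)
minutes, seconds = divmod(rem, 60)
print(f"lunar day = {t_lunar:.0f} s "
      f"= {int(hours)} h {int(minutes)} min {seconds:.0f} s")
```

The result, about 24 h 50 min, matches the lunar-day figure discussed next.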
The full spin period of our planet is the time interval from one lunar zenith to the next, i.e., between two successive intersections of the Earth–Moon direction with the same meridian: about 24 h 50 min 28 s, that is, 89,428 mean solar seconds. Its name is the lunar day, and it is in use in oceanography. This day is about 3.8 % longer than the sidereal one. The real Earth's spin velocity is roughly 4.6 % smaller than its (total and averaged) velocity in the absolute frame of reference. A similar situation applies to the Moon's spin. Neglecting its libration, the Moon does not turn around its axis, and this fact is quite clear in the lunar reference frame. An observer on the Earth always perceives the same face of the Moon. However, in the xCy reference frame, the Moon (neglecting precession of its orbit) accomplishes one full rotation around its axis in 27.35 mean solar days. The intrinsic and orbital rotations of the Moon are in 1:1 resonance. Approximately, this period corresponds to the stellar, or sidereal, month. If, on the other hand, the motion of this body is considered in the solar frame of reference, then the angle of the Moon's rotation has to be reduced by the corresponding angle of the barycenter C's turn around the Sun. In that case, the rotation of the Moon appears slower, and it comes to 29.54 mean solar days. This is the solar, or synodic, month. As a paradigm, let us mention here the planet Mercury. In the stellar, that is, inertial frame of reference it moves prograde, in the rotational/orbital resonance 3/2. If one subtracts the two orbital rotations from this fraction, one obtains −1/2. The proper rotation of this planet is retrograde: it makes one full backward spin in two full orbital rotations. Another indicator of the Earth's spin pace is the rhythm of the tides, that is, of the periodic rises and falls of large bodies of water. Namely, due to the Moon's tidal force, the body of the ocean water is, practically, "pulled apart" along the Earth–Moon direction.
In the zone closest to the Moon, this force raises masses of water from the ocean bed, that is, from the Earth, while in the farthest zone it pulls the bed (Earth) away from the water. That is the reason why the oceans tend to bulge toward and away from the Moon in these zones (Fig. 3). As seen in the same figure, low water occurs at about right angles to the Earth–Moon direction. The Sun's gravitational pull also influences the tides to some degree, of course, but its effect on the Earth's tides is less than half that of the Moon's. Particularly large tides are experienced in the Earth's oceans when the Sun and the Moon are aligned with the Earth, at the new and full phases of the Moon. These tides are called spring tides. Conversely, when the Moon is at a right angle to the Earth–Sun direction (first-quarter and last-quarter phases), the tidal bulges are generally weaker, and these are called neap tides.

Fig. 3 – High and low tide

Depending on the angle between the Earth–Moon direction and the Earth's spin axis (which, by the way, does not coincide with the axis of the Earth's total rotation), on the disposition of the Sun with respect to the Earth–Moon direction, and on the ocean depth and the form of the coastline as well, the period between two successive high (low) tides varies from semidiurnal, ~ 12 h 25 min, to diurnal, ~ 24 h 50 min, tides. Because of that, the lunar day is also named the tidal day in oceanography.

The conventional answer to the question of what the period of the Earth's full rotation around its axis is, is either the 24-hour mean solar day or the stellar day, about four minutes shorter. Our planet performs at least five different rotations: spin, precession together with its axis, orbital motion around the Earth–Moon barycenter, turning together with this barycenter around the Sun and, finally, rotation with the solar system around the center of the galaxy.
The solar day represents the period of the first three, and the stellar day that of all these finite rotations united in one. That is why the stellar day is shorter than the solar one. In the solar and the stellar frames of reference it seems that Earth rotates faster, that the Moon rotates around its axis in the so-called ideal resonance, and that Mercury has a prograde (3/2 spin-orbit resonance) rotation, while in fact it has a retrograde spin (−1/2 resonance), and so on. If one wants to study the Earth's real motion, its rotations have to be uncoupled. The only frame of reference in which the Earth's spin is separated from all the other rotations is the lunar one. Actually, the Earth's spin period is more than 50 minutes longer than the solar and stellar days.
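As a final cross-check, the orbit parameters quoted in "The Moon's and Earth's orbits" follow from the eccentricity and the Moon/Earth mass ratio. The sketch below assumes standard mass values; small differences from the article's rounded figures are expected:

```python
import math

E = 0.0549           # eccentricity of the lunar orbit (from the text)
A_MOON = 380_868e3   # m, semi-major axis of the Moon's orbit about the barycenter
M_MOON = 7.342e22    # kg (standard value)
M_EARTH = 5.972e24   # kg (standard value)

b_moon = A_MOON * math.sqrt(1 - E**2)  # semi-minor axis: b = a * sqrt(1 - e^2)
c_moon = E * A_MOON                    # focal half-distance: c = e * a
a_earth = A_MOON * M_MOON / M_EARTH    # Earth's small orbit scales by the mass ratio

print(f"b_M ~ {b_moon/1e3:,.0f} km, c_M ~ {c_moon/1e3:,.0f} km, "
      f"a_E ~ {a_earth/1e3:,.0f} km")
```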
Preface
Chapter 1: Introduction

REQUIREMENTS ANALYSIS
Chapter 2: User Profiles
Chapter 3: Contextual Task Analysis
Chapter 4: Usability Goal Setting
Chapter 5: Platform Capabilities and Constraints
Chapter 6: General Design Principles

DESIGN/TESTING/DEVELOPMENT

Design Level 1
Chapter 7: Work Reengineering
Chapter 8: Conceptual Model Design
Chapter 9: Conceptual Model Mockups
Chapter 10: Iterative Conceptual Model Evaluation

Design Level 2
Chapter 11: Screen Design Standards
Chapter 12: Screen Design Standards Prototyping
Chapter 13: Iterative Screen Design Standards Evaluation
Chapter 14: Style Guide Development

Design Level 3
Chapter 15: Detailed User Interface Design
Chapter 16: Iterative Detailed User Interface Design Evaluation

INSTALLATION
Chapter 17: User Feedback

ORGANIZATIONAL ISSUES
Chapter 18: Promoting and Implementing The Usability Engineering Lifecycle
Chapter 19: Usability Project Planning
Chapter 20: Cost-Justification
Chapter 21: Organizational Roles and Structures

A commitment to usability in user interface design and development offers enormous benefits, including greater user productivity, more competitive products, lower support costs, and a more efficient development process. But what does it mean to be committed to usability? Inside, a twenty-year expert answers this question in full, presenting the techniques of Usability Engineering as a series of product lifecycle tasks that result directly in easier-to-learn, easier-to-use software. You'll learn to perform a complete requirements analysis and then incorporate the resulting goals and constraints in a highly structured, iterative design and development process. This process doesn't end with installation but instead begins anew with the collection of user feedback that will guide further development. Also covered are organizational issues related to the implementation of Usability Engineering, including cost justification, project planning, and organizational structures.
- Unites all current UE techniques in a single, authoritative resource, presenting a coherent lifecycle process in which each clearly defined task leads directly to the next.
- Teaches concrete, immediately usable skills to practitioners in all kinds of product development organizations, from internal departments to commercial developers to consultants.
- Contains examples of actual software development projects and the ways in which they have benefited from Usability Engineering.
- Deals in specifics, not generalities: provides detailed templates and instructions for every phase of the Usability Engineering lifecycle.
- Pays special attention to Web site development and explains how Usability Engineering principles can be applied to the development of any interactive product.

Audience: software developers, user interface designers, usability engineers, and web designers and developers.

© Morgan Kaufmann, 1999. Published 22nd March 1999 by Morgan Kaufmann.

Dr. Deborah J. Mayhew is owner and principal of Deborah J. Mayhew & Associates, a consulting firm based in Massachusetts offering courses and consulting on all aspects of Usability Engineering and user interface design. Clients include American Airlines, AT&T, Ford, Harvard University, and NASA. Dr. Mayhew received her Ph.D. in Experimental Cognitive Psychology from Tufts University. She is the author of Principles and Guidelines in Software User Interface Design (Prentice Hall), a coeditor of Cost-Justifying Usability (Academic Press), and a contributor to Human Factors and Web Development. Deborah J. Mayhew and Associates, West Tisbury, MA, U.S.A.
Air pollution occurs when the air contains gases, dust, smoke from fires, or fumes in harmful amounts. Tiny atmospheric particles, called aerosols, are a subset of air pollution that is suspended in our atmosphere. Aerosols can be either solid or liquid. Most are produced by natural processes such as erupting volcanoes, and some come from human industrial and agricultural activities.

Aerosols have a measurable effect on climate change. Light-colored aerosol particles can reflect incoming energy from the sun in cloud-free air, and dark particles can absorb it. Over the historic period, the net effect globally was for aerosols to partially offset the rise in global mean surface temperature. Aerosols can also modify how much energy clouds reflect, and they can change atmospheric circulation patterns.

Aerosol sources, composition, and removal processes

Worldwide, most atmospheric aerosol particles are produced by natural processes such as grinding and erosion of land surfaces resulting in dust, salt-spray formation in oceanic breaking waves, biological decay, forest fires, chemical reactions of atmospheric gases, and volcanic injection. Some particles, on the other hand, have human origins: industry, agriculture, transport (including aviation), and construction. The composition of atmospheric aerosol particles varies widely depending on their source; they may contain salts (predominantly sulfates), minerals (such as silicon), organic materials, and, in most cases, water. The particles grow by absorbing water vapor and other gases. In moist air, clouds form when water vapor condenses onto these 'cloud condensation nuclei'.
These then grow into cloud drops, which eventually fall to the surface as rain or snow, depositing the particles on land or in the ocean.

Although dust plumes from the Sahara and Gobi deserts can be seen circling most of the globe in satellite pictures, aerosol particles in the lower troposphere (the lowest layer of the atmosphere, where weather occurs) are usually removed from the atmosphere by settling and precipitation within several days to weeks after they are produced. In the stratosphere (the atmospheric layer above the troposphere), chemical reactions of gases from volcanoes produce sulfate particles that can remain for one or more years, spreading over much of the globe.

Aerosol particles and climate

Although we are familiar with local particulate 'air pollution' due to human activities, the fact that atmospheric particles of both natural and human origin have a substantial influence on our climate is less widely understood. The particles can play important climatic roles both outside and inside clouds. In clear air, tiny aerosol particles interact with the solar beam. Particles containing little or no carbon are effectively 'white': they reflect solar radiation, making the air and the Earth's surface below them a bit cooler than they would otherwise be. Sulfate particles in the stratosphere from the Pinatubo volcanic eruption in 1991, for example, produced measurable cooling for two years over much of the globe. In contrast, particles containing substantial amounts of black carbon (e.g., soot, which is typically produced by combustion of fossil fuels, biofuels, and biomass burning) warm their surroundings by absorbing solar radiation before it reaches the ground. Because black carbon absorbs the incoming sunlight aloft, it also acts as a shade, and the ground surface below becomes cooler. These tiny particles also seed cloud droplets in the lower troposphere.
Water droplets and ice particles are basically white, so they reflect solar radiation; on the other hand, the condensed water also traps and emits longwave radiation, producing heat. Thus clouds can have either cooling or warming effects on a local area, depending on the altitude of the cloud and whether the reflecting or the trapping effect is stronger.

Because of the many unknowns relating to aerosol particles, the magnitude of aerosol impacts is one of the areas in which understanding advanced most between the fourth and fifth IPCC climate assessments. Particularly noteworthy is the higher confidence regarding aerosol-radiation interactions and volcanic aerosols. NASA currently has several aerosol monitoring sites across the world, whose data are used for better understanding of climate and air quality, and for better pollution management.

[Figure: Aerosol monitoring network. This figure displays the surface stations where NOAA conducts aerosol measurements.]

How is human-caused air pollution changing our climate?

Human-caused particulate air pollution has a relatively minor, and likely decreasing, impact on our climate. Aerosol particles of human origin can have a net effect of diminishing the energy that arrives at the Earth's surface. Scientists estimate that particles produced by human activities have reduced the solar energy (heat) reaching the ground by as much as 8 percent in densely populated areas over the past few decades. This effect, sometimes referred to as 'solar dimming,' may have masked some of the late 20th-century global warming due to heat-trapping gases.

Human activities that result in the production of both reflecting and absorbing aerosol particles have been curtailed by legislation and modern technology in many places. The 'pea soup fogs' that so bedeviled London in Sherlock Holmes' day, for example, were caused by particles produced by incomplete combustion of coal for heating.
These 'fogs' in London are now a thing of the past, thanks to mandatory scrubbers and other advanced combustion techniques. Regional and temporal aerosol trends indicate shifts in the regions of influence for aerosols produced from human activities. Haze clouds seen over urban regions in the United States give dramatic proof of the effects of human-induced particles, while atmospheric soot production from burning fossil fuels for energy is still very high in many parts of Asia, including large clouds of pollution across much of China.

The primary cause of global warming is too much CO2

Global warming is primarily caused by emissions of too much carbon dioxide (CO2) and other heat-trapping gases into the atmosphere when we burn fossil fuels to generate electricity, drive our cars, and power our lives. These heat-trapping gases spread worldwide and remain in the atmosphere for decades to centuries. Thus, as we continue to emit these gases, their atmospheric concentrations build up over time. In contrast, atmospheric aerosol particles remain largely localized near their sources and do not linger in the atmosphere for long, so that even if we continue to emit them at current rates, their atmospheric concentrations will not build up markedly over time. The effect of long-lived global warming emissions will far outweigh the cooling effect of short-lived atmospheric particles.

Can climate intervention with aerosols save us from global warming?

Because global warming is such a serious threat, some scientists and engineers have explored the idea of harnessing the reflective power of some aerosol particles to temporarily combat global warming while non-fossil-fuel energy sources are being more fully developed. Several climate intervention (so-called 'geoengineering') strategies for reducing global warming propose using atmospheric aerosol particles to reflect the sun's energy away from Earth.
The idea is to artificially increase the concentrations of 'white' atmospheric aerosol particles above the surface of the ocean and/or in the lower stratosphere (above where weather occurs) in order to reflect more of the sun's energy away from Earth. The field of climate intervention, still in its infancy, has the potential to buy us some time in the attempt to maintain relatively slow warming rates. Such actions, however, face large hurdles regarding international governance, and experimenting with our very complex climate system by dramatically increasing reflecting aerosols carries the potential for large unintended, and potentially dangerous, side effects on ecosystems, agriculture, and human health. Therefore, any intentional increase in aerosol particles would have to be considered carefully and thoroughly before a possible deployment. Most importantly, because aerosol particles do not stay in the atmosphere for very long, while global warming gases stay in the atmosphere for decades to centuries, accumulated heat-trapping gases will overpower any temporary cooling due to short-lived aerosol particles. Climate intervention is not considered a replacement for the reduction of carbon emissions.
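The cooling effect of reflective 'white' aerosols described above can be put in rough numbers with a standard zero-dimensional energy-balance estimate. The solar constant, present-day planetary albedo, and the size of the hypothetical albedo increase below are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope sketch of why 'white' aerosols cool the planet.
S = 1361.0            # W/m^2, top-of-atmosphere solar constant (assumed)
albedo = 0.30         # approximate present-day planetary albedo (assumed)
delta_albedo = 0.005  # hypothetical increase from added reflective aerosol

# Globally averaged absorbed solar flux: incoming S/4, minus reflected fraction.
absorbed_now = S / 4 * (1 - albedo)

# Change in absorbed flux from the albedo increase (negative = cooling).
forcing = -(S / 4) * delta_albedo

print(f"absorbed solar flux now: {absorbed_now:.1f} W/m^2")
print(f"forcing from +{delta_albedo} albedo: {forcing:.2f} W/m^2")
```

Even a half-percent albedo increase yields a forcing of roughly −1.7 W/m², comparable in magnitude to a large share of present-day greenhouse forcing, which is why such schemes are discussed at all, and why their side effects would be so consequential.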
Kicking off this week with a review of traumatic brain injury.

What is traumatic brain injury?

A traumatic brain injury (TBI) involves any injury to the brain from an external force (Tipton-Burton, 2018). A TBI may be caused by a fall, motor vehicle accident, gunshot wound, or assault, among other causes. Any impact to the head may cause a TBI of varying severity. Fatal TBIs are more common in men, and adults older than 65 see the highest rate of TBI, mostly due to falls (Tipton-Burton, 2018).

Two words that you should know are coup (pronounced "coo") and contrecoup (pronounced "counter-coo"). Coup involves the directly injured area, while contrecoup involves the indirectly injured area (Tipton-Burton, 2018). Another way to think about this is that the coup is the first impact of the brain against the skull and the contrecoup is the second impact of the brain against the skull. Think of it like whiplash.

Experiencing a TBI may or may not result in coma. The Glasgow Coma Scale is used to assess an individual's level of consciousness (Tipton-Burton, 2018). This scale takes into consideration the individual's verbal response, motor response, and eye opening. The Rancho Los Amigos Scale is then used to measure levels of awareness and cognitive function for individuals who are emerging from a coma and throughout their recovery (Tipton-Burton, 2018).

In one of my fieldwork experiences, I got to work with several individuals who were recovering from TBI. Some were in their 20s; some were in their 60s, 70s, and 80s. TBI knows no age. Some were in tragic motor vehicle accidents, some were assaulted, some injuries were caused by poor decisions, and some by pregnancy complications. We must remember that the brain is the control center of the body. Any damage to the control center can affect many areas of one's life.

What does TBI "look like"?
Individuals recovering from TBI may experience physical, cognitive, visual, emotional, and behavioral symptoms. The severity of the symptoms depends on the severity of the TBI and the location of the damage to the brain. Every individual recovering from a TBI will present with a different combination of symptoms. Some may be stronger physically but struggle with cognitive functions, or vice versa.

Physical challenges may include rigidity, muscle spasticity, re-emergence of primitive reflexes (the reflexes we all had when we were babies), decreased muscle strength and endurance, changes in range of motion and/or sensation, and postural deficits (Tipton-Burton, 2018).

Cognitive challenges may include difficulty with executive functioning (decision making), attention, concentration, memory, self-awareness/judgment, safety, processing, and initiation/termination of activities (being able to start and stop activities in an appropriate timeframe, with or without the need for cueing) (Tipton-Burton, 2018). The cognitive aspect of TBI is extremely interesting to me, and I was privileged to be able to work on cognition with clients during my fieldwork experience.

Visual challenges may include changes in acuity (ability to see clearly), accommodation (ability to adjust to near/far objects), convergence (ability to fixate vision on a single object), and scanning (ability to look around one's environment in an organized manner) (Tipton-Burton, 2018). Other visual diagnoses related to TBI include nystagmus ("shaky" eyes), hemianopia (visual field cuts), reduced blink rates, ptosis (drooping eyelid), and lagophthalmos (incomplete eyelid closure) (Tipton-Burton, 2018).

Emotional and behavioral symptoms are related to emotional regulation, changes in social roles and independent living, and dealing with loss. Often, upon emerging from a coma, an individual may be angry and threatening. This will eventually lessen as the individual continues to recover.
Other behavioral symptoms may include impulsivity, yelling, swearing, or displaying inappropriate gestures. An individual may experience depression and anxiety as a result of changes in social roles (i.e., changes in friend groups as a result of one's disability, or changes in one's ability to take care of their children). As mentioned previously, these challenges vary from individual to individual and may include more or less than what I just reviewed.

How do OTs help individuals with TBI?

OTs are part of the recovery process from the beginning. If the individual is still in a coma, the OT can help maintain range of motion, prevent pressure sores, and provide family education to keep the family involved and informed throughout the recovery process. Depending on the severity of the coma, sensory stimulation may also be utilized to start the "rewiring" process of the brain. When the individual emerges from a coma, further treatment can be provided to address the presenting physical, cognitive, and emotional/behavioral symptoms. Additional family education would be provided to keep the family involved in the recovery process.

As the continuum of care progresses, OTs can be involved with cognitive and visual retraining by teaching compensatory strategies. Cognitive strategies may include using phone reminders for important appointments/occasions, breaking down instructions into smaller chunks, and keeping important identifying information readily available in case of emergency (among many, many others!). One of my favorite visual strategies is called the lighthouse strategy. Think of the eyes as the light of a lighthouse. When reading, the eyes should start at the left, scan to the right, then return to the left side, just like a lighthouse. Back and forth.

Family education is imperative throughout the continuum of treatment and recovery. I have read several books (true stories) this summer about traumatic brain injury recovery.
Most of the books were told from a family member's perspective (often the spouse's). These books have taught me a lot about the impact TBI has on family members. Often, the family member will grieve the loss of their spouse/parent/child/sibling, as a brain injury often changes many aspects of one's personality and their related family roles. The severity of the brain injury may cause the individual to appear like a whole new person. I highly recommend these books for both clinicians and family members of an individual recovering from brain injury, as I believe they provide wonderful insight into brain injury recovery. The books are listed below:

Left Neglected by Lisa Genova – This book provides insight from both the individual's and the family's perspective after a tragic motor vehicle accident.

Where is the Mango Princess? by Cathy Crimmins – This book provides insight from the family's perspective after a tragic boating accident.

In an Instant: A Family's Journey of Love & Healing by Bob Woodruff & Lee Woodruff – This book provides insight from both the individual's and the family's perspective after Bob Woodruff, an ABC World News reporter, experiences an IED explosion in Iraq.

I feel like this post doesn't dive deep enough, and it's a little all over the place considering how interesting I find this population to work with. For that, I apologize. However, I hope you found this post informative and learned at least one thing. If you didn't, pick up one of those books to get a glimpse into the world of traumatic brain injury. It will enlighten you! Stay tuned for more posts this week.

*Note: These examples of OT involvement are strictly my own. Information in this post was provided through the reference below.

Tipton-Burton, M. (2018). Traumatic brain injury. In H. Pendleton & W. Schultz-Krohn (Eds.), Pedretti's occupational therapy practice skills for physical dysfunction (8th ed., pp. 841-870). St. Louis, MO: Elsevier.
A species-rich group of insects (Auchenorrhyncha), also called "chirpers" because of their characteristic sound. Most of the approximately 40,000 species are very strikingly coloured. All of them have a proboscis and a sucking pump for feeding, which is done by piercing certain parts of the plant and sucking them out. Most species are restricted or specialised to very specific food plants. The body length is usually between 2 and 40 millimetres, in a few species up to 70 millimetres. Because of their jumping ability (hence "leafhoppers"), they are often confused with grasshoppers, to which they are not related. They can jump up to 70 centimetres from a standing position; in comparison, a human being would have to jump about 200 metres.
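The human comparison above is a simple proportional scaling by body length. The sketch below assumes an illustrative leafhopper body length of about 6 mm and a human height of 1.75 m; neither figure comes from the text:

```python
# Scaling the leafhopper's jump to human proportions.
leafhopper_body_m = 0.006   # ~6 mm, an assumed typical leafhopper body length
leafhopper_jump_m = 0.70    # jump from a standing position (from the text)

jump_in_body_lengths = leafhopper_jump_m / leafhopper_body_m

human_height_m = 1.75       # assumed adult height
equivalent_human_jump_m = human_height_m * jump_in_body_lengths

print(f"jump = {jump_in_body_lengths:.0f} body lengths")
print(f"human equivalent ~ {equivalent_human_jump_m:.0f} m")
```

With these assumed numbers the jump comes out at roughly 117 body lengths, i.e. on the order of the 200-metre human jump quoted above.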
"Dialectical" means integration of opposites, and DBT is a powerful cognitive-behavioural approach that helps people integrate the seemingly opposite strategies of acceptance and change. Rather than being counselling-focused, DBT focuses on learning skills to manage emotions. While well known for its usefulness as a treatment for personality disorders, DBT is also helpful for mood disorders and for changing behaviour patterns such as self-harm, suicidal ideation, and substance abuse. It has also shown good outcomes for people with eating disorders and acquired brain injury (ABI). This practical course introduces the concepts behind DBT and provides a suite of practical tools and activities for personal or client use.
- Understanding DBT and integration
- Mindful awareness
- Distress tolerance
- Interpersonal effectiveness
- Emotional regulation
- Reality testing
- Self-soothing skills
- Accessing the "Wise Mind"
This course includes
- Engaging, up-to-date materials.
- Take-home reference package and free online resources.
- All-day catering and beverages, with dietary needs catered for.
- Certificate of attendance.
STEM has been the focus of much attention in our schools recently, but most of the effort in this arena has been devoted to preparing elite students for baccalaureate and postgraduate programs. While college-educated STEM workers with bachelor's degrees or higher make up the largest proportion of STEM workers, there are also many opportunities in STEM for students with associate's degrees, certificates, and even some for students straight out of high school. Consequently, we should be teaching STEM skills to students at all levels, educating them about opportunities in STEM careers, and providing them with clear pathways to higher education.

Different Meanings of STEM: There is no standard definition of STEM, and different reports use different definitions. For this "STEM" report, CEW included the following occupation groups: computer occupations; engineering and engineering technicians; life and physical science occupations; architects, surveyors, and technicians; and mathematical occupations. The Department of Commerce reports discussed in previous posts exclude architects from their definition of STEM but include STEM managers.

"One of the most difficult challenges in the American education system will be improving the middle of the test score distribution. The demand for high-level skills, including STEM skills, has grown well beyond the elite careers that require a Bachelor's or graduate degree...Strengthening the relationship between education and STEM competencies begins with a stronger focus on the middle in education policy." (page 76)

Not only does our economy need high school and community college graduates to fill nearly 1 million STEM jobs in the next decade, but those students will reap enormous benefits from an education that prepares them for these jobs. The "STEM" report finds that people with a high school diploma or less have higher lifetime earnings working in STEM jobs than in any other field.
STEM jobs also pay more than all other fields for people with an associate's degree, some college, or a postsecondary certificate. The graph to the right shows how employees with less than a college education can earn more working in a STEM job than in other fields.

In Virginia, the majority of projected STEM employment still requires a four-year degree or higher, but more than 30% of Virginia STEM jobs will be at the associate's degree level or lower. According to CEW projections, in 2018 there will be about 25,000 jobs in STEM fields in Virginia that require a high school diploma, 31,000 that require some college, and 56,000 that require an associate's degree. In total, their report predicts 113,000 jobs in STEM fields that require less than a bachelor's degree. [See the 2010 report, Help Wanted]

In order to qualify for these STEM careers, however, students need to have a clear pathway laid out for them in high school and to be provided with a clear and effective transition to a two-year college. However, unless students are part of the minority who take a CTE program focused on STEM, they are rarely shown how to connect STEM courses to STEM careers. According to the report, nationwide:

"American high schools offer very little career and technical education or any substantial on-ramps to postsecondary career and technical education. As a result, students who don't get career and technical preparation in high school and don't succeed in the transition to postsecondary programs are left behind." (page 76)

In our current general education system, courses are organized into hierarchies: a student moves from algebra I to algebra II, or from biology to chemistry to physics, and teachers focus primarily on moving students from one academic level to the next.
The report argues that to better prepare students for STEM, "we should focus on developing curricula that put academic competencies into applied career and technical pedagogies and link them to postsecondary programs in the same career clusters" (page 76). This is the strategy currently being applied in the majority of Virginia's CTE programs, with our firm emphasis on career education and counseling, dual enrollment, and industry certification. Furthermore, Virginia's CTE programs do reach a large percentage of secondary students. But there is more work to do to reach all of the students who need to build these skills.

The "STEM" report concludes that to improve the STEM workforce and the American economy as a whole, we need to begin by expanding and broadening access to STEM-based career and technical education programs. The report recommends four steps:

- High school and postsecondary career and technical programs should focus on broadly conceived career clusters that maximize further educational choices as well as employability, as many students will need jobs if they are to pay for college and related expenses.
- To the extent possible, "learn and earn" programs in STEM should allow students to work and study in the same field beginning in high school.
- Programs of study that align high school and postsecondary STEM curricula should be strengthened.
- Hybrid programs that mix solid technical knowledge with the development of more general skills and abilities should be encouraged in a broader range of schools.

Virginia's Governor's STEM Academies, which combine work-based learning with academic and CTE STEM courses articulated to postsecondary programs, are exactly the kind of programs that the "STEM" report recommends. But they are not the only programs working toward this goal.
The majority of our CTE programs include science, technology, and mathematics instruction coordinated with academic instruction; clearly articulated pathways to careers and postsecondary education; and some opportunities for work-based learning. However, programs of this sort are complex, expensive to teach, and expensive to administer. Without substantial support from school divisions and the state, it will be impossible to expand and broaden access to these programs in the way that "STEM" recommends.
Growth and fruiting characteristics of eight apple genotypes assessed as unpruned trees on 'M.9' rootstock and as own-rooted trees in southern France

The influence of root system and genotype on vegetative and reproductive growth was characterized on untrained apple (Malus domestica) genotypes that were own-rooted or grafted onto M.9 rootstock. The eight genotypes assessed were selected at INRA for resistance to scab (Venturia inaequalis) and low susceptibility to mildew (Podosphaera leucotricha), good fruit quality and aptitude to storage, and, depending on genotype, other traits such as regular bearing and one fruit per inflorescence. The two main objectives were to determine the influence of (1) the scion genotype and (2) the root system genotype on tree growth and yield. Trunk cross-sectional area (TCSA), branch cross-sectional area (BCSA), and the position of branches with a basal diameter of more than one centimeter were measured at the end of the third year of growth in the orchard. Yield and fruit size data were collected during the first four years of tree growth. Different genotypes had different TCSA and total BCSA, but all had a smaller TCSA and total BCSA when grown on M.9 compared with own-rooted trees. The relationship between TCSA and total BCSA also differed depending on genotype but remained unaffected by root system. The relative location of BCSA, or basitony of the trunk, was influenced by the type of root system. Own-rooted trees were more basitonic than trees on M.9. Yield, precocity, and fruit size differences were attributed to both genotype and root system. In all genotypes, yield efficiency (kg of fruit/cm2 TCSA) was higher with M.9. This may not be the defining characteristic, since some genotypes expressed similar or even higher yields and fruit size in the 3rd and/or 4th year when own-rooted.
Precocious own-rooted trees, which in our study belong to the type IV architectural class (acrotonic), may be more interesting in the long term because, although they enter production later, they may have higher cumulative yields as early as the 4th year, and a better distribution of fruit within the canopy.
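The yield-efficiency metric used in the abstract (kg of fruit per cm² of trunk cross-sectional area) reduces to a simple ratio. A minimal sketch follows; the numbers are hypothetical illustrations, not values from the study:

```python
def yield_efficiency(yield_kg: float, tcsa_cm2: float) -> float:
    """Yield efficiency as defined in the abstract: kg of fruit per cm^2
    of trunk cross-sectional area (TCSA)."""
    if tcsa_cm2 <= 0:
        raise ValueError("TCSA must be positive")
    return yield_kg / tcsa_cm2

# Hypothetical illustration: a tree on M.9 with a smaller trunk but a
# comparable yield scores higher on this metric than an own-rooted tree,
# which is the pattern the abstract reports.
m9 = yield_efficiency(12.0, 20.0)          # 0.6 kg/cm^2
own_rooted = yield_efficiency(14.0, 35.0)  # 0.4 kg/cm^2
print(m9 > own_rooted)  # True
```

As the abstract notes, a higher ratio is not the whole story: absolute yield and fruit size can still favour own-rooted trees in later years.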
<urn:uuid:e87d8917-d591-43de-98a8-c97effc02c27>
CC-MAIN-2016-44
http://scholar.sun.ac.za/handle/10019.1/9846
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725470.56/warc/CC-MAIN-20161020183845-00475-ip-10-171-6-4.ec2.internal.warc.gz
en
0.972194
496
2.59375
3
A literary analysis essay does not summarize the book's content. Frank Bonkowski is an educational author, English language teacher, and e-learning specialist, passionate about learning and teaching. As a lover of writing, Frank has a twofold mission: to teach English learners to write better and to train language teachers in teaching effective academic writing. He has been a teacher trainer at a number of universities, including McGill, Concordia, and TELUQ, a centre for distance education.

Effective Methods for Literature Essay Examples Around the USA

Once you've read and critically appraised your articles, using a review matrix can help you to compare and contrast research, note important information or issues, and track ideas or research over time. There are numerous strategies and matrices to help you synthesize research (https://literatureessaysamples.com/a-foreshadowing-conversation-in-romeo-and-juliet/). You can develop your own matrix or choose from the many examples found online. Any matrix, though, should fit your research area and the types of research you are reviewing. Below are two common examples.

Realistic Systems for Literature Essays Explained

Although some people question the validity of literary reviews, learning to write one not only sharpens your thinking skills but also familiarizes you with scientific writing. Conclusion: it consists of the restatement of your main thesis and your conclusions about it. The body paragraphs present your analysis of the book, providing evidence that supports your statements. A short essay might have only one body paragraph; a major analysis paper might have 10 or 12. This does not matter, because each body paragraph can follow the same structure.
To be able to draw sound conclusions in your essay, you must gather all kinds of evidence while reading. This means that different details, expressions, and other kinds of proof matter, even information about the author. For that purpose, it is good to make notes while reading a book. Nonetheless, it is entirely possible to overcome all of these problems. You can buy specialized books on Amazon, talk with other students, and try writing a solid literary research essay several times in order to make sure that you are prepared. However, you should keep in mind that such exercises can feel pointless: you do not discover anything new, you do not become smarter, you merely rewrite articles written by someone else.

What are topic sentences? They state the main point of each paragraph, serve as its mini-thesis, and act as signposts that alert readers to the important points of the literary analysis. Topic sentences relate all paragraphs to your thesis, act as signposts for your main argument, and define the scope of each section. After giving the reader some context, present the thesis statement. To make the elements listed above effective for the reader, make sure that your literary analysis essay is well organized. A literary analysis is a careful examination of the mechanism of a literary work and a discussion of how that mechanism functions to reveal meaning.

Uncomplicated Products for Literature Essay Samples – What's Required

Whether or not you believe in outlining, and whether or not you are able to outline, one clear fact remains: the college essays that receive the best grades look as though they have been outlined. Your thesis sentence will suggest the organization of your paper.
Find more literary essay examples
<urn:uuid:ed8dff0f-98eb-4997-863d-807a31667650>
CC-MAIN-2023-23
http://www.chantiwangroup.com/real-world-products-of-literary-analysis-conclusion-some-insights/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644855.6/warc/CC-MAIN-20230529105815-20230529135815-00124.warc.gz
en
0.920026
872
2.59375
3
How does the solubility of aldehydes and ketones change in water as the number of carbon atoms changes?

The solubilities of both aldehydes and ketones in water decrease as the number of carbon atoms increases. However, alkyl groups are electron-donating, so ketones are more polar than aldehydes. Thus, ketones are slightly more soluble than aldehydes with the same number of carbon atoms. The dividing line for "soluble" is about four carbon atoms for aldehydes and five carbon atoms for ketones.
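The dividing lines quoted above can be captured as a small rule-of-thumb function. This is only an encoding of the stated cutoffs, not real solubility data:

```python
def is_water_soluble(carbon_count: int, compound_class: str) -> bool:
    """Rule of thumb from the answer above: aldehydes count as 'soluble'
    up to about 4 carbons, ketones up to about 5 (one more, because
    ketones are slightly more polar than aldehydes)."""
    cutoffs = {"aldehyde": 4, "ketone": 5}
    return carbon_count <= cutoffs[compound_class]

print(is_water_soluble(4, "aldehyde"))  # True  (e.g. butanal, borderline)
print(is_water_soluble(5, "aldehyde"))  # False (e.g. pentanal)
print(is_water_soluble(5, "ketone"))    # True  (e.g. pentan-2-one, borderline)
```

Real solubility varies continuously with chain length and branching; the function only marks which side of the quoted dividing line a compound falls on.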
<urn:uuid:c3a6bb1d-8478-437f-b1d5-701eea777b73>
CC-MAIN-2020-29
https://socratic.org/questions/5939fd217c01497939899df1
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655919952.68/warc/CC-MAIN-20200711001811-20200711031811-00595.warc.gz
en
0.875374
135
2.859375
3
Full Form of WTO – World Trade Organization

World Trade Organization is the full form of WTO. It operates as an intergovernmental institution (international organization) responsible for overseeing foreign trade between two or more nations. It has its headquarters in Geneva, Switzerland, and was founded on 1 January 1995 with the purpose of reducing tariffs and other barriers to international trade; at present it has a membership of no fewer than 164 member states.

The WTO was established to replace GATT (the General Agreement on Tariffs and Trade). GATT was enacted after the end of World War II solely for the purpose of establishing global economic cooperation. GATT was created in 1947 and consisted of 23 members. It had its headquarters in Geneva, Switzerland, and was part of the Bretton Woods system. The purpose behind introducing GATT was to ensure stable trade as well as a stable world economic environment.

Later the International Trade Organization (ITO) came into the picture, and it was believed that GATT might become a part of the ITO; a negotiation to that end was held in Havana in 1948. The purpose behind introducing the ITO was to lay out the general basic rules with respect to foreign trade and other global economic matters. The charter that was submitted failed to receive the approval of the U.S. Congress, and hence the WTO eventually came into existence. The WTO was established in 1995 and acted as a complete replacement for GATT; this is also why it is termed the successor to GATT. The WTO is the only intergovernmental organization in the world that deals with the rules pertaining to foreign trade between countries.

Objectives of WTO

The objectives of the World Trade Organization are discussed below:

- WTO aims at improving the standard of living of every individual belonging to its member nations.
- WTO aims at ensuring full employment as well as a rise in demand for goods and services.
- WTO aims at enlarging the production and trading of goods and services.
- WTO also aims at ensuring that there is full utilization of national and international resources.
- WTO aims at safeguarding the environment from being depleted as a result of human interference.
- WTO aims at ensuring that all companies accept and abide by the concept of sustainable development.
- WTO also aims at implementing a new foreign trade mechanism in the way provided in the Agreement.
- WTO aims at promoting international trade that can benefit all countries.
- WTO aims at removing the existing obstacles to an open global trading system.
- WTO aims at taking special steps towards the development of the poorest and least developed countries.
- WTO aims at enhancing competitiveness among all the member countries in order to benefit the maximum number of customers.

Functions of WTO

The functions of the World Trade Organization are discussed below:

- The World Trade Organization administers the TPRM (Trade Policy Review Mechanism).
- The World Trade Organization administers the World Trade Organization agreements.
- The World Trade Organization monitors domestic trade policies.
- The World Trade Organization handles trade-related disputes.
- The World Trade Organization provides an open forum for trade-related negotiations.
- The World Trade Organization offers technical assistance to developing countries.
- The World Trade Organization cooperates with similar intergovernmental organizations.
- The World Trade Organization cooperates with the IMF (International Monetary Fund) and the IBRD (International Bank for Reconstruction and Development).

Advantages of WTO

The advantages of WTO are discussed below:

- WTO helps in the promotion of peace and well-being among countries.
- With WTO, disputes between member nations can be handled constructively.
- WTO helps in the stimulation of economic growth.
- WTO provides assistance to developing nations.
- WTO ensures an adequate level of corporate governance and ensures free trade, which means the cost of living is reduced.
- Trade between nations under the governance of WTO raises employment and income opportunities for the participants.
- WTO protects governments from pressures such as lobbying.
- The free trade ensured by WTO offers better and wider choice with respect to goods and services.
- WTO boosts agricultural exports and international trade.
- WTO enhances the inflow of FDI (foreign direct investment) and helps in restricting dumping.
- WTO provides huge benefits for industries such as clothing and textiles.

Disadvantages of WTO

The World Trade Organization has a number of drawbacks too. The dark side of the World Trade Organization is discussed below:

- The World Trade Organization is threatening for the agricultural sector, because it reduces subsidies and encourages the import of food crops.
- The World Trade Organization poses a significant threat to industries that operate on a national scale.
- The World Trade Organization has significant impacts on human and employee rights.
- The World Trade Organization undermines national sovereignty and decision-making at the local level.
- The WTO can increase economic instability at the national level.
- New industries may find it difficult to establish themselves in an intensely competitive environment.
The WTO is run by ministers of its member nations, and its agreements cover various industrial products, agricultural goods, and services. The important objectives of the WTO are to raise the standard of living of individuals belonging to member countries, to safeguard the environment, to promote peace, to ensure full employment, and to stimulate free trade, which would ultimately result in economic growth.

This has been a guide to the Full Form of WTO and its definition. Here we cover the objectives and functions of WTO, along with its advantages and disadvantages. You may refer to the following articles to learn more about finance –
<urn:uuid:f72f48c2-f7af-4252-85d5-c9d782835233>
CC-MAIN-2020-10
https://www.wallstreetmojo.com/full-form-of-wto/
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00327.warc.gz
en
0.952241
1,234
3.578125
4
Your Library Online! Quality Information at the Library

Our online resources are searchable collections of newspaper and magazine articles, encyclopedias, government information, and other kinds of information from reliable sources that are evaluated and selected by librarians. Searching the open internet won't turn up some of the most authoritative sources you need for your homework. Using the library's online resources is not the same as using the internet: often, the information you find in our databases is a digitized version of what you would find in a reference book, a newspaper, a magazine, or an encyclopedia.
<urn:uuid:6c154644-b95a-45dd-b61b-00084edb225c>
CC-MAIN-2014-10
http://www.berkeleypubliclibrary.org/kids/research-learning-kids
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021767149/warc/CC-MAIN-20140305121607-00038-ip-10-183-142-35.ec2.internal.warc.gz
en
0.905137
125
2.546875
3
In this section

Lumbar puncture may be performed as part of the initial work-up of a sick child, or later in the course of an illness once the child has stabilised if there were initial contraindications. It is preferable to obtain a CSF specimen prior to antibiotic administration; however, this should not be unduly delayed in a child with signs of meningitis or sepsis.

You must always discuss with a senior registrar or consultant before doing a lumbar puncture.

Do not do a lumbar puncture if the child is so sick that you will give antibiotics for meningitis even if the CSF is normal on microscopy. The clinical findings that suggest you should give antibiotics immediately, and delay lumbar puncture for 1-2 days until the child is improving, are:

Informed verbal consent should be obtained. This should include a discussion of the benefits of the procedure in terms of possible diagnoses and potential complications.

Complications of LP

The LP Parent Information Sheet may be useful in talking to parents about the procedure.

The most important determinant of a successful lumbar puncture is a strong, calm, experienced assistant to hold the patient. Position of the patient is critical.

Cover the puncture site with a band-aid or occlusive dressing (eg Tegaderm). Bed-rest following lumbar puncture is of no benefit in preventing headache in children.

Parent Information Sheet (print version) - Parent Information Sheet (HTML version)
<urn:uuid:4362e573-ca0c-4c01-a48a-1d6eeab959eb>
CC-MAIN-2017-43
https://www.rch.org.au/clinicalguide/guideline_index/Lumbar_Puncture_Guideline/
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826642.70/warc/CC-MAIN-20171023202120-20171023222120-00693.warc.gz
en
0.880005
342
2.546875
3
The book, published by the Hellenic Maritime Museum, contains 150 photos, documents on the loss of the submarine, and digital photos of "Perseus" exactly as it is lying on the seabed.

With the assistance of the Company for Tourist Development and Promotion of the island of Kefalonia and the Skala Community, the research team identified and recorded the British submarine "HMS Perseus", at a depth of 52 meters, for the first time since it was sunk by an Italian mine on December 6, 1941. The naval tragedy that claimed the lives of 60 officers and sailors, including the Greek Lieutenant Nicolaos Merlin, became widely known thanks to the perseverance of Greek divers and the detailed historical research of Ms. Rena Giatropoulou.

A single sailor, stoker John Capes, who was 31 years old, managed to escape from the sunken submarine and, after an odyssey that lasted for months, found himself in Alexandria, Egypt. He later died in the 1980s. The identification of the submarine, almost 60 years later, was based in part on the testimony of this sole survivor.

The experienced diver Kostas Thoctarides notes: "When I first saw the picture of the submarine on the sonar display, my heart leaped. A wreck is not just a mass of steel, but has many human stories. We approached respectfully, in order not to disrupt the final resting place of the 60 souls who drowned that night of December 6, 1941. I feel a moral obligation and ask divers not to remove anything from the wreck."

The "Perseus" is resting on the sandy bottom, almost upright, with a slight tilt of 18 degrees to starboard. The hole caused by the explosion of the mine is visible on the left side of the bow, the only visible damage to the steel hull, which is almost 90 meters long. The manhole from which Capes escaped is still open, while inside the submarine broken instruments and clutter attest to the violence of the explosion that sank "HMS Perseus".
Today many certified recreational divers visit the wreck of the "HMS Perseus", which is now a popular diving destination in Kefalonia.

One of the Fallen

Thomas "Tommy" Craig was born in Fairfield Street, Govan, in Glasgow. He joined the Royal Navy when he was 17 years old and started his naval career on the battlecruiser HMS Repulse. He was then transferred to the submarine HMS Perseus. At the age of 23, he went down with Perseus on December 6, 1941.

Identity of Perseus

The submarine HMS Perseus was designed in 1927 and built at the Vickers-Armstrong shipyards in England in 1930. Type: Parthian class. Length: 88.4 meters. Displacement: 1,475 tons on the surface and 2,040 tons submerged.
<urn:uuid:4cde460d-840b-44fe-9d1c-83b3b8170f3b>
CC-MAIN-2017-47
https://argunners.com/sole-survivors-story-from-sunk-british-submarine-hms-perseus/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806258.88/warc/CC-MAIN-20171120223020-20171121003020-00124.warc.gz
en
0.961874
613
3.296875
3
A small, elusive sparrow of wet grasslands and marshes of interior North America, the buffy-faced Le Conte's Sparrow has been likened to a “twenty-dollar gold piece” if seen up close (Roberts 1932c). More often it is viewed only briefly as a “bit of wind-blown straw” as it flies away to drop again into dense cover (Mengel 1980). It reluctantly flushes from observers, usually remaining “stubbornly in the field, creeping about like mice under mats of grass” (Mengel 1965b). When searching for Le Conte's Sparrows in Wisconsin, Robbins (Robbins 1969, Robbins 1991) found only 8 of 86 singing males on perches exposed enough to provide an identifiable view. This secretive behavior made it difficult for early collectors to procure specimens of this species, and for ornithologists to study it. The species was first described by John Latham in 1790 from a specimen taken in Georgia; the species was named for John L. Le Conte, a physician friend of J.J. Audubon and a distinguished entomologist. The second specimen was taken in the early 1830s in North Dakota by Prince Maximilian von Wied; the third, also from North Dakota, in 1843 by John Bell (in company with J. J. Audubon); and the fourth in 1872 by Gideon Lincecum in Texas (Wied 1858, Allen 1886b, Walkinshaw 1968c). In 1873 Elliot Coues discovered the species on its breeding grounds in North Dakota (Coues 1878d, Walkinshaw 1937a). The first nest was found in Manitoba, in 1882 as first claimed by E. T. Seton (Seton 1890a), or perhaps in 1883 by W. Raine (Raine 1894b). By the time Walkinshaw (Walkinshaw 1968c) prepared his species account of Le Conte's Sparrow for A. C. Bent's life-history series, he estimated that not more than 50 nests had been found; in the 30 years since then not that many more nests have been found (Lowther 1996a); but in 1998 began a 4-year study of grassland birds in North Dakota and Minnesota that accumulated observations of 50 more nests (Winter et al. 2005b).
<urn:uuid:be56d601-c012-4e09-813b-cd3baa81d902>
CC-MAIN-2018-17
https://test.birdsna.org/Species-Account/bna/species/lecspa/introduction
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948125.20/warc/CC-MAIN-20180426090041-20180426110041-00023.warc.gz
en
0.954876
489
3.265625
3
Hypocaust of ancient public baths

- Coordinates: 40°10′N 22°29′E (40.167°N 22.483°E)
- Administrative region: Central Macedonia
- Municipal unit area: 172.743 km² (66.696 sq mi)
- Elevation: 40 m (130 ft)
- Municipal unit population: 10,885 (density 63/km², 160/sq mi)
- Community population: 1,424 (2011); community area: 31.375 km²
- Time zone: EET (UTC+2), summer (DST) EEST (UTC+3)
- Postal code: 601 00

Dion or Dio (Ancient Greek: Δίον, Greek: Δίο, Latin: Dium) is a village and a former municipality in the Pieria regional unit, Greece. Since the 2011 local government reform, it is part of the municipality Dio-Olympos, of which it is a municipal unit. It is located at the foot of Mount Olympus.

The ancient city owes its name to the most important Macedonian sanctuary dedicated to Zeus (Dios, "of Zeus"), leader of the gods who dwelt on Mount Olympus; as recorded by Hesiod's Catalogue of Women, Thyia, daughter of Deucalion, bore Zeus two sons, Magnes and Makednos, eponym of the Macedonians, who dwelt in Pieria at the foot of Mount Olympus. Hence from very ancient times, a large altar had been set up for the worship of Olympian Zeus and his daughters, the Muses, in a unique environment characterised by rich vegetation, towering trees, countless springs and a navigable river. For this reason Dion was the "sacred place" of the Ancient Macedonians. It was the place where the kings made splendid sacrifices to celebrate the new year of the Macedonian calendar at the end of September. In the spring, purification rites of the army and victory feasts were held.

The first mention of Dion in history comes from Thucydides, who reports that it was the first city reached by the Spartan general Brasidas after crossing from Thessaly into Macedon on his way through the realm of his ally Perdiccas II during his expedition against the Athenian colonies of Thrace in 424 BC.
According to Diodorus Siculus, it was Archelaus I who, at the end of the 5th century BC when the Macedonian state acquired great power and emerged onto the stage of history, gave the city and its sanctuary their subsequent importance by instituting a nine-day festival of games that included athletic and dramatic competitions in honor of Zeus and the Muses, whose organisation was overseen by the Macedonian kings themselves. Many ancient authors speak of the sculptural bronze masterpiece by Lysippos made for Alexander depicting 25 mounted companions who fell at the Battle of the Granicus, later taken to Rome by Metellus.

A city was built adjacent to the sacred sites that acquired monumental form during the reigns of Alexander the Great's successors; Cassander took a great interest in the city, erecting strong walls and public buildings, so that in Hellenistic times Dion was renowned far and wide for its fortifications and splendid monuments. Dion and its sanctuary were destroyed in 219 BC by Aetolian invaders but were immediately rebuilt by Philip V. Many of the dedications from the sanctuary that had been destroyed were buried in pits, including royal inscriptions and treaties, and these have been discovered recently. It fell to the Romans in 169 BC, and the city was given a new lease of life in 32/31 BC when Octavian founded the Colony of COLONIA JULIA AUGUSTA DIENSIS here. It experienced its second heyday during the reigns of 2nd- and 3rd-century AD Roman emperors who were fond of Alexander the Great. Dion's final important period was in the 4th and 5th centuries AD, when it became the seat of a bishopric. It was abandoned following major earthquakes and floods.

The site of ancient Dion was first identified by the English traveler William Martin Leake on December 2, 1806, in the ruins adjoining the village of Malathria. He published his discovery in the third volume of his Travels in Northern Greece in 1835.
Léon Heuzey visited the site during his famous Macedonian archaeological mission of 1855 and again in 1861. Later, the epigraphist G. Oikonomos published the first series of inscriptions. Nevertheless, systematic archaeological exploration did not begin until 1928. From then until 1931, G. Sotiriadis carried out a series of surveys, uncovering a 4th-century BC Macedonian tomb and an early Christian basilica. Excavations were not resumed until 1960 under the direction of G. Bakalakis in the area of the theatre and the wall. Since 1973, Professor D. Pandermalis of the Aristotle University of Thessaloniki has conducted archaeological research in the city. Dion is the site of a large temple dedicated to Zeus, as well as a series of temples to Demeter and to Isis (the Egyptian goddess was a favorite of Alexander). Excavation of the magnificent House of Dionysos revealed a mosaic of exceptionally fine quality. A rare and unusual find in the museum is a bronze "hydraulis" or hydraulic musical pipe organ found in a former workshop. In 2006, a statue of Hera was found built into the walls of the city. The statue, 2200 years old, had been used by the early Christians of Dion as filling for the city's defensive wall. In October 1992, the municipality Dio (Dimos Diou) was formed. At the 1997 Kapodistrias reform, it was expanded with the former communities Agios Spyridonas, Karitsa, Kondariotissa, Nea Efesos and Vrontou. The administrative center was in the village of Kondariotissa. At the 2011 local government reform Dio merged with the former municipalities East Olympos and Litochoro to form the new municipality Dio-Olympos. Dio became a municipal unit of the newly formed municipality, and the former municipal districts became communities. The community of Dion consists of the village of the same name and Platanakia. The municipal unit has an area of 172.743 km2, the community 31.375 km2. 
- View of the archaeological site
- Ruins at the archaeological site
- Ancient column
- Sanctuary of Isis
- View of the villa of Dionysus containing the large Dionysus mosaic
- Sanctuary of Demeter
- The sacred spring with the sanctuary of Zeus Hypsistos in the background
- Sanctuary of Isis
- Four-columned temple dedicated to Isis Lochia, Sanctuary of Isis
- View of the Hellenistic theater
- Baths of ancient Dion
- Public toilets along the central road
- Mosaic floor in the Great Baths complex
- Detail of a mosaic floor, Great Baths complex
- The hypocaust of the Great Baths complex
- Shields dedicated by Alexander the Great on his victory over the Persians at the Granicus river
- Large mosaic at the Archaeological Museum of Dion
- Inscription from the Archaeological Museum of Dion reading "ΒΑΣΣΙΛΕΩΣ ΦΙΛΙΠΠΟΥ" [King Philip]

References
- "Απογραφή Πληθυσμού - Κατοικιών 2011. ΜΟΝΙΜΟΣ Πληθυσμός" (in Greek). Hellenic Statistical Authority.
- Kallikratis law, Greece Ministry of Interior (in Greek)
- Hesiod, Catalogue of Women fr. 7.
- Diodorus Siculus, Bibliotheca historica
- Name changes of settlements in Greece
- Kantouris, Costas. "Greek archaeologists find Hera statue". Associated Press, March 1, 2007.
- EETAA local government changes
- "Population & housing census 2001 (incl. area and average elevation)" (PDF) (in Greek). National Statistical Service of Greece.
- F. Papazoglou, Les villes de Macédoine romaine, Supplément 18 du BCH, Paris, 1988.
- D. Pandermalis, Dion, the archaeological site and the museum, Athens, 1997.

Wikimedia Commons has media related to Dion (Greece).
- Municipality of Dion website
- Official website of the archaeological park of Dion
- Archaeological site of Dion
- Images from the archaeological site
<urn:uuid:2651945d-ab86-4d1e-bcc3-5b03cc41b726>
CC-MAIN-2021-39
https://gateway.ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Dion%2C_Pieria.html
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00192.warc.gz
en
0.937764
1,924
2.546875
3
A Study of the Transpiration Surfaces of Avena sterilis L. var. Algerian Leaves using Monosilicic Acid as a Tracer for Water Movement
M.J. Aston and Madeleine M. Jones
Vol. 130, No. 2 (1976), pp. 121-129
Published by: Springer
Stable URL: http://www.jstor.org/stable/23372068
Page Count: 9

The sites and pathways of transpiration from leaves of Avena sterilis L. var. Algerian were studied using the accumulation of monosilicic acid as a tracer for water movement. Seedlings of Algerian oats were grown under silicon-free conditions and fed monosilicic acid, in a normal nutrient solution, via the roots. The silicon component of monosilicic acid was located in freeze-substituted tissue by means of X-ray microprobe analysis. Methods of tissue fixation preventing post-treatment movement of tracer were developed and it was determined that monosilicic acid is a suitable tracer for water. Sites of water loss were marked by accumulation of silicon. Internal evaporating surfaces having a high intensity of water loss were demonstrated. Evaporation from epidermal surfaces was most intense over the guard and subsidiary cells with very little evaporation from the cuticular surfaces of normal epidermal cells. Moderately high evaporation occurred from epidermal fibre cells located above the veins. Evaporation from all exposed walls of guard cells including the wall adjacent to the pore was intense. Smaller amounts of tracer were located in the unexposed anticlinal walls of epidermal cells as well as within the unexposed walls of mesophyll cells. The results are interpreted as demonstrating the extent of internal transpiration surfaces and that cuticular epidermal transpiration is low. Strong support is given to the existence of peristomatal transpiration.
Internal pathways of water movement are defined and the occurrence of these is discussed in relation to cuticular transpiration and lateral water movement in the epidermis. Planta © 1976 Springer
<urn:uuid:341a6eb7-fc42-4936-b864-af4f0f0c2a73>
CC-MAIN-2016-44
http://www.jstor.org/stable/23372068
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00037-ip-10-171-6-4.ec2.internal.warc.gz
en
0.950855
461
2.65625
3
Do neonicotinoids harm solitary bees?

While there is still debate among environmentalists, researchers, and chemical companies about the effects of neonicotinoids (also known as neonics) on bees, our stance is that natural is best. Recent studies on traditionally managed bees (like honey bees and bumblebees) show that neonics interfere with a bee's ability to learn to feed from complex plants, to remember where home is, and to reproduce. Neonics have been shown to reduce fertility in both male and female bees.

We do not recommend using toxic chemicals in a yard because they can throw our local ecology out of balance. We believe that when both prey and predator are in balance, our ecosystem is healthy. When you remove the pest, predators have no food source, and thus either starve or leave. When the pest shows up again, there seemingly is no recourse other than to reach for the chemical shelf for a solution. This creates a costly chemical cycle in your yard and garden. Avoiding chemicals may appear to tip the balance to the side of too many pests, but predators will find the pests and your yard will return to balance within a few years at most.
The Beginning: inFORM
In 2015, Sean Follmer, a human-computer interaction researcher and designer at MIT, introduced inFORM: breakthrough technology developed alongside his team to create a "smart" tool able to adapt itself to user needs. They called it shape-shifting technology. Built with linear actuators, inFORM is a tool that can render both physical content and user interface elements. Able to create 3D data graphs, respond to movement, or even create entire cities of interactive architecture models, it changes its 3D display as needed and promises to be a tool that revolutionises the way we work, and the way our homes work. Creating a visual tool like no other, inFORM is able to interact with the physical world around it, recognising and responding to movement. This allows for remote collaboration: with inFORM, users can simply reach out and touch objects in front of them, and the device will mimic the movement on the other party's interface. The benefits in the current international market are obvious. The inFORM team went further. Explaining their ambition to create mainstream responsive furniture, and to take the meaning of "smart" homes to a whole new level, they introduced their responsive table. Using the same technology as their work interface, the team created a table that responds to the environment around it and moulds itself to what you need. Follmer demonstrated the uses this could bring to your home by showing how the table can shape itself around different objects. It can change its shape to better accommodate your laptop, creating an instant work environment. It can also recognise objects placed on it by their shape, identify them by importance, such as your keys, and raise them above other objects to stop you from forgetting them.
The Present: Shape-Shifting Micro-Robot
Now, shape-shifting technology has made its debut in synthetic biology.
But this time, as the first ever shape-shifting robot, made from DNA and protein. Yes, you read that right. It was developed by a research team at Tohoku University and the Japan Advanced Institute of Science and Technology. The robot, which is able to perform important living functions, is believed to be the first step towards creating a bio-inspired robot designed on a molecular basis.
Image Source: Science Daily
In the past, the chemistry and synthetic biology fields have been able to integrate bio-modules into their work. However, what makes this robot unique is that the researchers were able to push the boundaries of bio-robotics further by integrating molecular machines into artificial cell membranes, effectively creating a molecular robot. The robot's body is able to change shape through its integrated actuator, which is responsible for putting mechanical devices into motion. Thanks to the molecular clutch, and the fact that it is made from protein and DNA, the robot is able to start and stop its shape-shifting behaviour in response to specific DNA signals. Though the molecular robot is currently extremely small, a millionth of a millimetre in size, it is a major advancement for robotics engineering, as it is the first molecular robot able to function in environments like the human body.
The Future: The Potential in Medicine
Much advancement has already been made in recent years in shape-shifting technology. Scientists have progressed from working with material which needed external triggers to tell it to transform, to material which can be encoded to follow a sequence of shape-shifting transformations without stimuli. In this way, they have been able to create things like synthetic flowers that bloom at pre-determined times. These developments have major implications for the medical and scientific fields.
With the creation of a micro-robot able to shape-shift and live inside the human body, the field of implants and prosthetics promises to benefit greatly. Technologies like these continue to evolve and revolutionise industries, and medicine is the field most likely to be transformed by shape-shifting technology and materials. Research into adapting pre-programmed shape-shifting technology for medical applications has already begun. Scientists are working on developing implants that can deliver medicine from within the human body, or react to conditions in a predetermined manner without any stimuli beyond the pre-programmed sequence. The development of the shape-shifting robot by Japanese researchers may be the catalyst for a new age of medicine, one in which humans can seamlessly join technology and medicine to limits as yet unknown.
Isn't full inclusion a child's right?
Suzanne writes: I am a parent advocate and the parent of a child with autism. I attended an IEP meeting for a 6-year-old child with autism in a neighboring school district. The child's IEP Team plans to place the child in a self-contained "learning handicapped" class. The child's mother wants her daughter to be fully included with an aide. The IEP Team won't budge. Am I missing something? I was able to place my five-year-old daughter in a regular Kindergarten class with an aide. I didn't run into this problem. Isn't full inclusion a child's right?
No, full inclusion is not a right. Many parents and educators are surprised to learn that the word "inclusion" is not in the statute. The IDEA requires that disabled children receive a free appropriate public education (FAPE) in the least restrictive environment (LRE). The least restrictive environment requirement is often referred to as inclusion or mainstreaming.
Can school districts place children with disabilities in separate special education programs where they are segregated from children who are not disabled? Sometimes. Policies about inclusion vary from one state or jurisdiction to another -- and even between neighboring school districts.
When you finish reading IDEA Requirements: Least Restrictive Environment (LRE) and FAPE, you will understand why we cannot give a clear "yes" or "no" answer to your questions about inclusion.
Our Dr. Seuss lesson plan introduces students to the life and contributions of Dr. Seuss. This engaging lesson includes the design of a poster advertisement to familiarize students with Dr. Seuss' books, as well as a group presentation of the created poster. Students are asked to complete an interactive activity in which they cut out words and form 10 Dr. Seuss book titles using them. Students are also asked to read a series of Dr. Seuss quotes, tell what they think they mean, and explain how the quotes can be helpful to them. At the end of the lesson, students will be able to identify Dr. Seuss, list facts about his life, and explain his significance in history. Common Core State Standards: CCSS.ELA-Literacy.RI.1.7, CCSS.ELA-Literacy.RI.2.3, CCSS.ELA-Literacy.RI.2.7, CCSS.ELA-Literacy.RI.3.3, CCSS.ELA-Literacy.RI.3.7
What is toddler tooth decay?
Do you have children in your house? Then you have probably faced the problems of dental cavities and tooth decay and had to deal with them. Tooth decay is the breakdown or destruction of tooth enamel, the hard outer layer of the tooth. Tooth decay can lead to cavities in your little one: holes in your children's teeth. It is one of the most common chronic diseases among children and toddlers. You can get some valuable insights, and treatment where necessary, from Eschenbach's Pediatric Dentistry.
Causes of tooth decay in a toddler:
Tooth decay is caused by bacteria acting on food debris, especially carbohydrates, left on and between the teeth after eating. These foods include soda, candy, bread, cake, fruit juices, raisins, and cereal. Bacteria live naturally in our mouths, and they convert these foods into acid. The combination of bacteria, acid, and food forms plaque, which sticks to the teeth and causes cavities and decay.
Eating habits:
Tooth decay in toddlers can also result from their food habits. Kids love foods such as sweets, chocolates, and ice cream, and these sugary foods make their teeth prone to decay and cavities.
Drinks they prefer:
Your toddler may like various drinks such as shakes, sports drinks, and juices. Frequently drinking these beverages can harm the teeth, and bacteria will settle in the grooves and pits of the tooth.
Some medical issues:
If your toddler suffers from chronic allergies, the flow of saliva decreases, which raises the risk of cavities.
Feeding at bedtime:
If your toddler takes a bottle of milk or juice at bedtime, be aware that the liquid will sit on the teeth for hours and set the ground for bacteria.
Decreased amount of fluoride:
Fluoride is a natural mineral that helps prevent the formation of cavities. It is used in toothpaste, mouth-rinsing products, and more.
What raises the risk of tooth decay?
All children have bacteria in their mouths, so they all carry some risk of tooth decay. But some factors may raise your child's risk:
- A diet high in sugar and starch.
- High levels of cavity-causing bacteria.
- A limited water supply, which means less fluoride.
- Poor oral hygiene and care.
The symptoms to watch for:
There are some common signs of tooth problems and decay in toddlers:
- White spots are a sign of enamel breaking down. Watch for these in your toddler; they can also lead to tooth sensitivity in children.
- Early cavities may appear on the tooth as light brown spots.
- If the cavity deepens, the color changes from light brown to dark brown or black.
These symptoms vary from child to child. If the problem persists, your toddler may experience some of the following:
- Acute pain around the tooth and gum area.
- Sensitivity to certain foods and drinks, such as hot, cold, and sweet items.
You can also watch this useful video to learn more about cavities and enamel breakdown.
How to reverse your toddler's tooth decay?
So, you need to do your best to reverse your toddler's tooth decay. Even if your toddler has severe dental issues and cavities, you can make a difference by changing food habits and ensuring they get essential nutrients.
1. Provide real food to your kid:
As a responsible parent, you need to provide your kid with top-quality food. Eliminate packaged and processed foods from your shopping list and give them a diet of nutrient-dense foods and drinks. Fresh foods are very important for your children and ensure that your toddler gets all the essential vitamins and minerals needed for proper bone structure and the formation of teeth.
2. Give them fat-soluble vitamins:
Make sure your toddler gets the fat-soluble vitamins A, D, E, and K. These help your kids form healthy bones and teeth. You can find these vitamins in pastured eggs, grass-fed dairy, organ meats, and cod liver oil. You can also check this chart:
- Vitamin A: found in butter, cod liver oil, egg yolks, and fish.
- Vitamin D: found in cod liver oil, beef liver, egg yolks, and wild salmon.
- Vitamin E: found in butter, nuts, organ meats, and green leafy vegetables.
- Vitamin K: found in vegetables from the cabbage family, butter, dark leafy vegetables, and egg yolks.
3. Reduce phytic acid:
Cut back on the amount of phytic acid in your toddler's daily diet by reducing grains such as pasta, bread, rice, and cereal. You can also soak nuts and seeds before giving them to your children.
4. Reduce sweets:
Reduce the number of sweets, even natural sweets. Sugar causes decay and cavities in your toddler's teeth.
Apart from these solutions, you can also use a remineralizing toothpaste for your children. It promotes healthy teeth and supports your toddler's oral health.
Hello. In this video we are going to speak about regex tries. When we have a list of words to match, we usually concatenate them using a pipe symbol. A pipe symbol is an alternation operator in regex. However, when we build dynamic regular expressions using a list of words that is supplied by some user and we do not know beforehand what these words look like, we need to be very careful. Why? You know that in an NFA regex alternations are processed from left to right, and the first alternative "wins" while the others are not processed. However, an alternation group is only efficient when each alternative cannot match at the same position inside the string. If one or more alternatives can match at the same position inside the string, backtracking can come into play and ruin the whole performance. The more alternatives there are that can match at the same position inside the string, the less efficient the pattern is. Now, let's have a look at a concrete example. You can see an input string like this: "Afoos,foo,food, fool-foolish, foods". So, the initial naive attempt to match these words would be to construct a regular expression based on this list of words that will look like this. And you can see in a test at regex101.com that we match all these strings, not as whole words inside the string, and it takes about 29 steps. However, you can see that the result is not as expected, because in the word "food" we would like to match "food" and not "foo". That's why we might want to try Solution 2. Here, the words in the list are sorted by length in descending order and put into the regular expression like this. Now let's have a test, and we see that now we matched "food", "fool", "foolish" fully. However, we see that we also matched "foo" in "Afoos", and this is not expected. We need word boundaries. Good, Solution #3. Words are sorted by length in descending order inside the list, and we added word boundaries here.
So in Python we can build this regular expression using this code, and the regular expression looks like this now. Good. You can see there are correct matches. "Afoos" does not match, but it still takes 92 steps, and you can see that each alternative here starts with the same prefix, "foo", as you can see here as well. So this is where regex tries come into play. Have a look at this example here and the test. You can see that it took 67 steps for this regular expression to find all matches in the string. You can see that "foo" is only tried once. Then we can only try "l" or "d"; after "d" there is an optional "s", and after "l" there is an optional "ish" substring. So what is a trie? A trie is a kind of tree data structure used for locating specific keys from within a set. These keys are, in our case, strings, and the links between nodes in the trie are defined not by the entire key but by individual characters, or in our case, subpatterns. So, this is a trie for the keys "A", "to", "te", "ted", "ten", "i", "in" and "inn". A regex trie is a regular expression that is built in a tree-like way, as you can see here. How can we build such patterns? Usually, there are specific libraries that help you create regex tries. For example, in Python, you can use the trieregex library. You can see it here. All you need to do is install this library using "pip install trieregex", and then you can build your patterns using this library in a very simple and short way. However, you can also use other ways to build regex tries, and one of them is described in this StackOverflow answer, in a "Speed up millions of regex replacements in Python 3" post, which is really very nice and highly customizable. Now let's continue with our testing. I have found some arbitrary text on the Internet and it will be our sample input text.
In order to make our test closer to real-life scenarios, I decided to extract all words from this text. That's why I'm using this part of the code here with "re.findall", and you can see all the words extracted here. So there are a lot of words. Some of them start with the same letters, some of them don't. So this is really close to the situations you might come across in real life. So we build two patterns. One of them is not trie-based, and the other one is trie-based. We compile them and you can see the patterns here. Trie regexps look very cryptic, but in fact, that's why they are much more efficient in the end. In this block, I benchmarked the two regular expressions using the Python "re" library. You can see that I ran each of them 10,000 times, and the first pattern took longer than the second pattern, which means that the trie pattern is more efficient than the regular expression that is not trie-based. Trie-based regular expressions are very efficient for matching very long lists of words and even such things as emojis. For example, you can see this post at StackOverflow where all current emojis can be matched and extracted using a huge pattern, but this pattern is still quite efficient when you use it in your code. You can learn more about this construct if you follow the links in the FURTHER LINKS section, and if you liked my video, please click "Like" and subscribe to my channel if you haven't done it yet. Thank you for watching and happy regexing.
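The trie construction described in the video can be sketched in pure Python without any third-party library. The `trie_regex` helper below is my own minimal illustration of the idea, not the `trieregex` API: it builds a character trie from the word list, then serializes it into a pattern using non-capturing groups, making a group optional wherever a shorter word ends inside a longer one.

```python
import re

def trie_regex(words):
    # Build a nested-dict trie; the empty string marks the end of a word.
    trie = {}
    for word in words:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node[''] = {}

    def to_pattern(node):
        end = '' in node  # a word ends at this node
        branches = sorted(re.escape(ch) + to_pattern(child)
                          for ch, child in node.items() if ch != '')
        if not branches:
            return ''
        if len(branches) == 1 and not end:
            return branches[0]  # plain concatenation, no group needed
        # Group the alternatives; make the whole group optional if a
        # word also ends here (e.g. "foo" inside "food").
        return '(?:' + '|'.join(branches) + ')' + ('?' if end else '')

    return to_pattern(trie)

words = ["foo", "food", "foods", "fool", "foolish"]
pattern = re.compile(r'\b' + trie_regex(words) + r'\b')
print(trie_regex(words))  # foo(?:d(?:s)?|l(?:ish)?)?
print(pattern.findall("Afoos,foo,food, fool-foolish, foods"))
# ['foo', 'food', 'fool', 'foolish', 'foods']
```

Because the shared prefix "foo" appears only once in the generated pattern, the engine never retries it for each alternative, which is exactly why the trie form needs fewer steps than the plain alternation.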
* Any views expressed in this article are those of the author and not of Thomson Reuters Foundation.
As sudden frost hits Pakistan's prized guavas, farmers hesitate to replant
Every winter, I spend two or three weeks in my home district of Larkana in southern Pakistan, where I can relax with my family and childhood friends in the guava orchards, enjoying the juicy fruit known for its many health benefits. But on my most recent visit I was disappointed. Instead of a healthy harvest I found dried-up guava trees, their leaves and fruit turning a deathly black and brown. I could see a desperate look in the eyes of the guava farmers. In January, a sharp frost, the worst in the last five decades, caused widespread damage to the guava orchards just as they were about to be harvested. The cold snap struck guava trees across thousands of hectares in Larkana, a district of Sindh province. The guava is a tropical fruit that can tolerate brief periods of cold but not much frost. The sudden cold spell killed tens of thousands of trees, which will now need to be replaced with new plants that take four years to mature and bear fruit. In Larkana district, where guava orchards spread over 10,000 hectares that usually yield about 90,000 metric tonnes of fruit, district agriculture officials put losses at over 75,000 tonnes – or one billion Pakistani rupees ($9.5 million). The farmers I spoke to say that, since they had never experienced such cold weather before, they were caught unprepared. Many of them are unaware of ways to protect their orchards in the future if temperatures plunge again. In normal circumstances, agriculture department officials would give the farmers guidance. But officials too are unfamiliar with the new, erratic weather patterns that are resulting in shorter but colder winters and damage to crops.
Tree damage is the biggest worry for guava farmers, for whom the loss of trees has a much greater impact than the loss of one harvest because of the long waiting time for trees to bear fruit. ‘PUSHED BACKWARD FIVE YEARS’ “We have been pushed backward five years in terms of our income from these guava orchards,” farmer Nabi Khan told me. He leaned against a dried-up tree, with dead leaves and fruit falling from the branches when the wind blew. Another guava farmer, Razak Solangi, my childhood friend, who is the sole breadwinner of an eight-member family, says he has suffered financial losses of $20,000. Given the weather’s unpredictability, Solangi and other farmers are considering uprooting dead trees and replacing them with vegetable cash crops that yield two to three harvests a year. Smallholder farmers wonder whether to continue with guava farming, which is labour-intensive because of the need for constant vigilance against pest attacks. Karam Ali sits by a waterway that passes alongside Aghani road, a major thoroughfare that connects Larkana city with rural villages where guava is cultivated. Watching labourers clearing his 11-hectare guava field of dead fruit, he talks about opting out of guava farming and switching to the cultivation of tomatoes, potatoes and onions. “How can I persuade myself to replant the guava trees when I am not certain that next year we will not suffer the same kind of cold weather?” he asked. “Maybe, next year it is worse.” Guavas are produced and enjoyed all over Pakistan, either eaten fresh or made into jams and jellies. But the guavas of Larkana district, which account for over 70 percent of the country’s production, are considered to be the finest. Although there have been reports of weather-related damage to guava trees in other parts of Pakistan, the losses have not been so severe. The guava harvesting season from mid-January to mid-February is observed as a festival season all around the district. 
Farmers look forward to watching their produce being packed into wooden boxes for local consumption and export. Fathers send baskets of freshly-harvested guava to the homes of their married daughters as a token of love, a tradition followed in my own family. Friends gather to enjoy the fruit, and exchange baskets with each other. But this time the atmosphere during what should have been harvest-time is gloomy. Some wonder whether they will ever get to enjoy the prized Larkana guava again. Saleem Shaikh is a climate change and development correspondent based in Islamabad. Our Standards: The Thomson Reuters Trust Principles.
Sometimes a review really catches you by surprise. You think it’s going to be a ‘simple, little program’, and you find out that it is actually so much more. That’s what we’ve discovered with UberSmart Software’s UberSmart Math Facts program! Although I was very excited about this review (seriously!), I thought we’d be ‘doing flashcards’ on the computer screen. That’s all. I thought it would be helpful, but I wasn’t sure that UberSmart Math Facts would be anything amazing. Let me just put this out there: I. WAS. WRONG. Though UberSmart Math Facts seems so simple, and it’s offered for a very reasonable price, it’s not such a ‘simple, little math program’ after all. It’s packed full of terrific math learning, practicing, testing, and assessing. UberSmart Math Facts was created by homeschool dad, David Kocur, in response to his wife’s request for a flashcard program for their children. This downloadable software program teaches basic math facts (addition, subtraction, multiplication, and division) to children of all ages. It uses a flashcard and rote memorization learning method, which has proven so effective over the years. The main goal of UberSmart Math Facts is to encourage children (or users of any age, really) to memorize basic facts so they will be able to answer with automaticity. All users may begin with an assessment test to determine the level at which they are currently able to work. It begins with counting dots, moves on to sequencing, number relationships, and odd/even numbers. From there, it goes on to test basic addition, subtraction, multiplication, and division. (There is an “I don’t know” option to keep your child from feeling frustrated if the problems become too difficult.) Once the user has completed this assessment test, the program offers a written (and printable) assessment of the user’s math skills to show you where your children should be working in the UberSmart Math Facts program. I was very impressed with this report! 
The levels begin with simple ‘dot cards’ that look somewhat similar to dominoes. These are suitable for young learners, and they are a wonderful way to learn counting and addition skills in a simple way. (Our 6 and 5 year old boys are beyond this, and our 3 year old daughter is not quite ready for the dots, so we have not spent much time in this portion of the program.) If your child is ready for more, he or she may begin to work on basic addition facts. You may choose a level, such as focusing only on “+1” facts or “+2” facts. Flash cards appear with a blank in which to type the answer. Once you’ve typed an answer, you hit the enter key and the next fact appears. At the end of the fact practice (around 15 facts), you’ll see the results. These are worded in an encouraging way, and the results show whether the user has mastered this level or needs to continue practicing. UberSmart Math Facts works like typical flashcards, but because it is a software program, it has the ability to do so much more! For example: - If you choose, the program will automatically filter out facts that your child has mastered. - It will let you know when it’s time to move from practicing a set of facts to testing on that set of facts. - You can adjust time limits and track response times and test results with graphs. - It allows you to print a certificate when your child has completed all facts. UberSmart Math Facts can be used by all members of your family. In a passworded parent account, you may set up a username for each family member who will be using the software program. In this way, multiple children can utilize the program at the same time. This is an amazing benefit for homeschool families, but it’s also terrific for public or private school families who would like their children to have a little extra practice with basic math facts. UberSmart Math Facts is not a high-tech, game-like program with fancy bells and whistles. 
It has a very basic appearance, as you’ve likely noted throughout this post. It IS, however, a very effective method of learning and retaining basic math facts. This program is full of features to help you assess your child’s math abilities and encourage math success. I very much appreciate UberSmart Math Facts, and I believe we will use this program with all three of our children over the next decade! For now, we will DEFINITELY continue to use this program with Alex, our 6 year old. With Alex, the fancy ‘bells and whistles’ can be a distraction. He does well when he is able to focus on the task at hand without feeling rushed. I feel that UberSmart Math Facts is a great fit for him! It is absolutely perfect for his needs right now, and I am incredibly thankful that we had the opportunity to review UberSmart Math Facts! My only complaint about UberSmart Math Facts is that we noticed a few typos scattered throughout the program. These were minor issues like spelling errors (your instead of you’re) and pop-up messages that told me I mastered an addition section even though I’d just completed a multiplication section. I only noticed this twice, and reviewers were using the pre-release version of Ubersmart 4.0, so it is possible that these errors have now been corrected. The typos do NOT affect the program’s ability to assess or teach your child. UberSmart Math is available for the low price of $24.95, but for a limited time, you can enter “v4 Early Bird” in the discount code box on the purchase page for a 30% discount! This discount is only valid through September 30, 2014. System Requirements: UberSmart Math is designed for Windows 7, 8, XP, and Vista. To read more reviews on UberSmart, please CLICK HERE or on the image below.
First Person user interfaces allow people to digitally interact with the real world as they are currently experiencing it. Through a set of "always on" sensors, these applications layer information on people's immediate view of the world and turn the objects and people around them into interactive elements. Simply place a sensor-rich computing device in a specific location, near a specific object or person, and automatically get relevant output based on who you are, where you are, and who or what is near you. This enables digital interactions with the real world that help people: navigate the space around them; augment their immediate surroundings; and interact with nearby objects, locations, or people. Through handheld and embedded GPS units, we've had digital tools that help people navigate the space around them for quite a while. But our interactions with these interfaces were primarily managed from above. In other words, we saw our current position in the World as a point on a map. More recently, three-dimensional representations of spatial navigation became more widespread in GPS units. This meant navigation cues designed from our current perspective. Consider the difference between the two screens from the TomTom navigation system shown above. The screen on the left provides a two-dimensional, overhead view of a driver's current position and route. The screen on the right provides the same information but from a first person perspective. This first person user interface mirrors your perspective of the world, which hopefully allows you to more easily follow a route. When people are in motion, first person interfaces can help them orient quickly and stay on track without having to translate two-dimensional information to the real world. TomTom's latest software version goes even further toward mirroring our perspective of the world by using colors and graphics that more accurately match real surroundings.
But why re-draw the world when you can provide navigation information directly on it? Google Maps Navigation uses actual satellite and street view images of the world around you and overlays directions and routes on them. In lots of cases, there are great reasons for software not to directly mimic reality. Not doing so allows us to create interfaces that enable people to be more productive, communicate in new ways, or manage an increasing amount of information. In other words, to do things we can’t otherwise do in real life. But sometimes, it makes sense to think of the real world as an interface. To design interactions that make use of how people actually see the world. In the case of navigating physical space, it may make sense to take advantage of first person user interfaces.
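The overhead-to-first-person translation these systems perform can be sketched as a small coordinate transform: a map waypoint is rotated into coordinates relative to the user's current position and compass heading, so cues can be rendered "as seen". The waypoint, position, and heading values below are invented for illustration:

```python
import math

def to_egocentric(waypoint, position, heading_deg):
    """Return (right, ahead) of a map point relative to the user's pose.

    heading_deg is a compass heading: 0 = north, 90 = east.
    Map coordinates are (east, north).
    """
    dx = waypoint[0] - position[0]
    dy = waypoint[1] - position[1]
    # Rotate the world offset by the heading so "ahead" points along
    # the direction of travel and "right" is perpendicular to it.
    h = math.radians(heading_deg)
    right = dx * math.cos(h) - dy * math.sin(h)
    ahead = dx * math.sin(h) + dy * math.cos(h)
    return right, ahead

# User at (10, 10) facing due north; waypoint 5 units to the north.
right, ahead = to_egocentric((10, 15), (10, 10), 0)
print(f"right={right:.1f}, ahead={ahead:.1f}")  # → right=0.0, ahead=5.0
```

Facing east instead (heading 90°), the same waypoint comes out as 5 units to the left, which is exactly the re-orientation a first person display has to perform every frame.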
<urn:uuid:1407e719-4a15-4f4b-bea2-cacf3544e35f>
CC-MAIN-2014-42
http://www.lukew.com/ff/entry.asp?1101
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447660.26/warc/CC-MAIN-20141017005727-00302-ip-10-16-133-185.ec2.internal.warc.gz
en
0.928716
518
3.640625
4
Estimating a Margin of Error Day 37 - Lesson 3.4 Use simulation to approximate the margin of error for a sample proportion and interpret the margin of error. Use simulation to approximate the margin of error for a sample mean and interpret the margin of error. Activity: How much TV do students watch? Students are expected to do the first page of this Activity in pairs. The second page is done as a whole group. On the first page, we are trying to get students to see the reason why we multiply the standard deviation by 2 in order to get the margin of error. The reason is that a majority of our estimates will be within 2 standard deviations of the mean (should be around 95%). Since our students have already seen the normal distribution and the 68-95-99.7 rule in their Algebra 2 class, we can also make this connection. Mean – 2 * S.D. = 5.019 – 2 * 0.262 = 4.495 Mean + 2 * S.D. = 5.019 + 2 * 0.262 = 5.543 29 out of the 30 (97%) of the estimates are between 4.495 and 5.543 hours of TV. On page two of the activity, we show students this calculation (which is really a 95% confidence interval…a preview of what’s to come!). We felt that we needed the idea of a confidence interval in order to discuss the margin of error. We worked as a whole group to take the students through the second example concerning the proportion of students who text during class. As an exit slip, students worked individually on the application. The Application allows students a chance to try to write an interpretation of a margin of error on their own. It also gives them their first chance to assess whether a claim is plausible (is the claimed value within the confidence interval?). We like that students are already practicing inferential thinking before we have formally reached statistical inference. We were very specific with how we wanted students to interpret the margin of error: “we expect the true proportion/mean (context) to be at most _______ away from our estimate of _________.”
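The simulation idea in the activity can be sketched in code: draw repeated samples, take two standard deviations of the resulting estimates as the margin of error, and check that roughly 95% of estimates land within it. The population parameters below (mean 5 hours of TV, SD 1.4) are invented for illustration:

```python
import random
import statistics

# Hypothetical population of TV-watching hours, roughly normal with
# mean 5 and SD 1.4 (values invented for illustration).
random.seed(42)
population = [random.gauss(5, 1.4) for _ in range(10_000)]

# Simulate 500 samples of 30 students each and record each sample mean.
sample_means = [
    statistics.mean(random.sample(population, 30)) for _ in range(500)
]

# Margin of error: 2 standard deviations of the simulated estimates.
sd_of_means = statistics.pstdev(sample_means)
margin_of_error = 2 * sd_of_means

# Check that about 95% of the estimates fall within the margin of error.
center = statistics.mean(sample_means)
within = sum(
    abs(m - center) <= margin_of_error for m in sample_means
) / len(sample_means)
print(f"margin of error ≈ {margin_of_error:.3f}, "
      f"{within:.0%} of estimates within it")
```

The proportion-of-texters example on page two works the same way, with `random.gauss` swapped for a 0/1 draw and the sample proportion in place of the sample mean.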
<urn:uuid:929c5158-96ff-45ae-aa33-9c819478821f>
CC-MAIN-2022-49
https://www.calc-medic.com/intro-day37
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00840.warc.gz
en
0.919806
474
3.78125
4
Mary Mahoney Award As nurses continue to advance health and health care for all, it is important that the profession grows in diversity to reflect the shifting demographics of the United States. While the registered nurse population is growing in diversity, minority backgrounds still represent significantly less of the workforce than their proportion of the country’s population. The American Nurses Association (ANA) recognizes there is still work to do, and the Mary Mahoney Award represents one of the ways we are living up to our commitment to maintaining and improving diversity in our health care system. The award is named in honor of Mary Eliza Mahoney, the first African American graduate nurse in the U.S. The award recognizes an individual nurse or group of nurses for special efforts they have made towards increasing diversity and inclusion within the nursing profession. Mary Mahoney: A True Pioneer of Nursing Mahoney graduated from the Training School of Nurses, New England Hospital for Women and Children, in 1879. One of only three of her 40-strong class to graduate, she became the first African American nurse to qualify in the U.S. A passionate believer in equality in nursing, Mahoney spent much of her life working to challenge perceptions and abolish discrimination in the field. In 1908 she became co-founder of the National Association of Colored Graduate Nurses (NACGN), and in 1909 gave the address at their first conference. During her 40 years in nursing, she provided exemplary patient care and made outstanding contributions to nursing organizations. NACGN established the Mary Mahoney Award in 1936, in recognition of Mahoney’s example to nurses of all races. The award has been conferred by ANA since 1952, following the dissolution of NACGN and its merger with ANA in 1951. When ANA established our Nursing Hall of Fame in 1976, Mahoney was one of the first inductees. She was inducted into the National Women’s Hall of Fame in 1993.
<urn:uuid:7da8dfdf-6cb1-44af-8383-bda382794640>
CC-MAIN-2018-26
https://www.nursingworld.org/ana/national-awards-program/mary-mahoney-award/
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867424.77/warc/CC-MAIN-20180625033646-20180625053646-00398.warc.gz
en
0.95504
470
2.578125
3
The Plymouth Brethren is a [[Christianity|Christian]] religious movement founded in Dublin, Ireland in 1828 EV and made prominent by John Nelson Darby, Edward Cronin, John Bellett, and Francis Hutchinson. As the movement spread, a large group of adherents assembled in Plymouth, and the members were called the Plymouth Brethren. Between the years 1845 and 1848 a difference over the "independence" of local meetings resulted in the first division, causing a distinction to be made between the Open Brethren, mainly referred to by the name Plymouth Brethren, and the Exclusive Brethren. Open Brethren remain loosely affiliated and over the years have come to resemble Protestant evangelical churches in doctrine, except that there are no officially recognized clergy and the Lord's Supper is celebrated weekly - both of which are common to open and exclusive groups alike. Plymouth Brethren and Crowley Crowley's parents were members of the Plymouth Brethren, an extremely devout Christian sect. The Bible was Crowley's only reading material as a youngster, and like many young people he particularly enjoyed the "exciting bits", in particular the stories about the Beast 666. This, and his mother's habit of referring to him as "The Beast", are what earned him his now-infamous nickname. - Brethren Online.Org (http://www.brethrenonline.org/) - My Brethren.Org (http://www.mybrethren.org/) - Plymouth Brethren.Com (http://www.plymouthbrethren.com/) - Wikipedia. (2004). Plymouth_Brethren (http://en.wikipedia.org/wiki/Plymouth_Brethren). Retrieved Sept. 23, 2004.
<urn:uuid:64180c3f-3c07-4b07-ae41-15a39b01ae62>
CC-MAIN-2018-51
http://www.thelemapedia.org/index.php/Plymouth_Brethren
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829542.89/warc/CC-MAIN-20181218164121-20181218190121-00300.warc.gz
en
0.927342
346
2.78125
3
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding From: Frank Reed Date: 2017 Aug 18, 12:53 -0700 Start simple. You could shoot an ordinary Sun-Moon fix, though you'll have to separate the sights in time, like a running fix, to get enough azimuth spread. For example, here in New England I could shoot the altitude of the Sun just after the beginning of the eclipse around 1:30 when its azimuth is around 200°. Then I could shoot the altitude of the Moon's upper limb (in front of the Sun!) at mid-eclipse about 2:45 when the azimuth of both bodies is nearly 230°, far enough from the first sight to cross the lines of position. And I could shoot the Sun's altitude again just before the end of the eclipse. Needless to say, I could skip the Moon entirely and shoot a Sun upper limb in the middle, but where's the fun in that? This is strictly for the amusement value of getting a line of position from the New Moon, which is otherwise impossible. For lunars-style sights, you could estimate the times of first and last contact. This is not actually easy, but we can use our sextants to make it more accurate. The two cusps at the Moon's limb crossing the Sun's face move quite rapidly under some geometric conditions. Just after the Moon begins to cross onto the Sun's face, and you detect that little bite taken out, measure the angle between the two cusps. Do this repeatedly, maybe once a minute for ten minutes, and then plot the results on a graph. You'll see that the rate of increase of the angle is high at the beginning and then rapidly slows down. If the limbs of both bodies were perfect circles, the rate at the beginning would be formally infinite. Can you extrapolate back to get the time to the second when the eclipse began? Reverse the process at the end of the eclipse. 
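The extrapolation described above can be sketched numerically: for circular limbs the cusp-separation angle grows roughly as the square root of time since first contact, so angle squared plotted against time is nearly a straight line, and its zero-crossing recovers the contact time. The measurements below are synthetic, generated purely for illustration:

```python
# For circular limbs, the cusp-separation angle s grows roughly as
# s ∝ sqrt(t - t0) just after first contact, so s² is linear in t.
# Fitting s² against t and extrapolating to s² = 0 recovers t0.

def contact_time_from_cusps(times_min, angles_arcmin):
    """Least-squares fit of angle² = a*(t - t0); returns t0 in minutes."""
    ys = [a * a for a in angles_arcmin]
    n = len(times_min)
    tbar = sum(times_min) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_min, ys)) \
        / sum((t - tbar) ** 2 for t in times_min)
    intercept = ybar - slope * tbar
    return -intercept / slope  # time at which angle² extrapolates to zero

# Synthetic sextant readings: true first contact at t0 = 0.0 minutes,
# cusp angle 3.0 * sqrt(t) arcminutes, one measurement per minute.
times = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
angles = [3.0 * t ** 0.5 for t in times]
t0 = contact_time_from_cusps(times, angles)
print(f"extrapolated first contact: t0 = {t0:.2f} min")
```

With real measurements the points scatter, but ten readings a minute apart are enough for the fit to pin the contact time down to a few seconds.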
You could also get those times by measuring the angle from the inside limb to the Sun's opposing limb (like the phase as described) and extrapolate more-or-less linearly to the time when that angle would be equal to the Sun's diameter (from tables or measured before and after the eclipse). Those times correspond to instants of zero apparent lunar distance, and you can reduce them as lunar distance sights with the usual caveats about very short distances. For observers relatively close to the path of totality, try measuring the altitudes of the cusps near the time of maximum eclipse. They will change rapidly and at different rates. Can you do anything with that?? Yet another lunar-like observation: measure the minimum angle during the eclipse between the Moon's limb on the face of the Sun and the limb of the Sun parallel to it. This is a measure of the maximum "phase" of the eclipse and provides a nice line of position without measured altitudes. This is very similar to getting a fix by observing artificial satellites against background stars though the potential accuracy is only about 5-10 nautical miles since the Moon is so far away. When using artificial satellites, since they are typically a thousand times closer, you can get results as much as 1000 times more accurate which is near GPS accuracy.
<urn:uuid:d6908b07-fec3-48ac-bfb6-4a8bac160a24>
CC-MAIN-2020-29
http://fer3.com/arc/m2.aspx/I-got-an-eclipse-FrankReed-aug-2017-g39789
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143354.77/warc/CC-MAIN-20200713064946-20200713094946-00212.warc.gz
en
0.953011
681
3
3
Damage or defect in a composite structure can significantly alter the average resistivity of the structure. A method to estimate the resistivity of composite structures using an inverse problem solving algorithm is presented that uses voltage distribution on the structure as data. Electrodes attached to the surface of the structure are used to obtain voltage data in response to current injection through a pair of these electrodes. The forward problem involves using the finite element method to predict the voltages at the electrodes using known values of resistivity. The inverse problem involves solving for the resistivity values using the experimentally measured voltage data. If the material does not have uniform properties, the computed resistivity values are average values. To explore the possibility of using this approach to detect defects in manufacturing or damage due to loading, the effect of artificially induced damage/defect on the overall resistivity of the structure is studied. - Design Engineering Division and Computers in Engineering Division Damage Sensing by Inverse Estimation of Electrical Resistivity of Composite Structure Zhang, S, & Kumar, AV. "Damage Sensing by Inverse Estimation of Electrical Resistivity of Composite Structure." Proceedings of the ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 3: 30th Computers and Information in Engineering Conference, Parts A and B. Montreal, Quebec, Canada. August 15–18, 2010. pp. 843-849. ASME. https://doi.org/10.1115/DETC2010-28859
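The forward/inverse pairing in the abstract can be illustrated with a drastically simplified one-dimensional analogue (not the paper's finite element method): inject a known current through a bar, measure voltages, and recover an average resistivity by least squares. All geometry and measurement values below are invented for illustration:

```python
# Toy 1-D analogue of the inverse problem described above: the forward
# model predicts voltage from resistivity, and the inverse step recovers
# an average resistivity from noisy voltage readings.

def forward_voltage(resistivity, length_m, area_m2, current_a):
    """Forward model: V = I * R, with R = rho * L / A for a uniform bar."""
    return current_a * resistivity * length_m / area_m2

def estimate_resistivity(measured_v, length_m, area_m2, current_a):
    """Invert the linear forward model by least squares over readings."""
    # For a model linear in rho, the least-squares estimate reduces to
    # the mean of the per-measurement solutions rho_i = V_i * A / (I * L).
    per_reading = [v * area_m2 / (current_a * length_m) for v in measured_v]
    return sum(per_reading) / len(per_reading)

true_rho = 2.5e-3          # ohm·m, hypothetical damaged-composite value
L, A, I = 0.1, 1e-4, 0.5   # bar length, cross-section, injected current
clean_v = forward_voltage(true_rho, L, A, I)
noisy = [clean_v * f for f in (0.98, 1.01, 1.00, 1.02, 0.99)]
rho_hat = estimate_resistivity(noisy, L, A, I)
print(f"estimated resistivity: {rho_hat:.3e} ohm·m")
```

The paper's actual setting replaces the scalar forward model with a finite element solve and the averaging with an iterative inverse algorithm, but the structure — predict, compare to measurement, adjust — is the same.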
<urn:uuid:61727cc1-e029-4284-93a5-cd9818993a8b>
CC-MAIN-2022-21
https://mechanismsrobotics.asmedigitalcollection.asme.org/IDETC-CIE/proceedings-abstract/IDETC-CIE2010/44113/843/359332
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00729.warc.gz
en
0.856859
331
2.671875
3
Welcome to the Starlink Constellation Simulator Starlink satellites are low earth orbit communications satellites currently being launched and operated by SpaceX. Unlike geostationary satellites, Starlink will provide very low latency internet access across large distances on earth. In order to provide coverage over most of the inhabited areas of earth, SpaceX is planning to launch a staggering 12,000 satellites. Each satellite will be placed into one of three orbital shells at various altitudes ranging from about 340 kilometers to about 1,150 kilometers. Some have expressed concern that this huge number of satellites in low earth orbit will dramatically change the night sky and affect long exposure astrophotography and astronomy. This site is designed to simulate a view of the sky once all 12,000 SpaceX Starlink satellites are in orbit. You can customize the number of satellites, the number of orbital planes, and the altitudes of the orbital shells of the simulated constellation.
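A back-of-envelope version of such a simulation: assuming a uniformly spread shell, a spherical Earth, and visibility down to the geometric horizon, the expected number of satellites above an observer is the shell size times the visible spherical-cap fraction. The shell size of 4,000 satellites used below is a hypothetical figure for illustration:

```python
# Rough estimate of how many satellites from one orbital shell are above
# the horizon at once. Assumptions: uniform coverage over the shell,
# spherical Earth, visibility down to the geometric horizon.
R_EARTH_KM = 6371.0

def satellites_overhead(n_sats, altitude_km):
    """Expected satellites above the horizon for a uniformly spread shell."""
    # A satellite is above the horizon when it lies within angular radius
    # alpha of the observer's sub-satellite point, cos(alpha) = R / (R + h).
    cos_alpha = R_EARTH_KM / (R_EARTH_KM + altitude_km)
    visible_fraction = (1.0 - cos_alpha) / 2.0  # spherical-cap area fraction
    return n_sats * visible_fraction

for h in (340, 550, 1150):
    print(f"{h:>5} km shell of 4000 sats: "
          f"~{satellites_overhead(4000, h):.0f} overhead")
```

Even this crude model shows why altitude matters so much to the night-sky question: a shell at 1,150 km keeps roughly three times as many satellites above any one observer as the same shell would at 340 km.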
<urn:uuid:b838ea02-b83a-4a98-a380-71dd1c0d39eb>
CC-MAIN-2020-45
https://www.howmanystarlinkswillfillyoursky.com/
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00290.warc.gz
en
0.892032
183
3.109375
3
Fast food is well-known to be an unhealthy food choice, yet many people continue to indulge. After all, it's quick, filling and, with the arrival of promotions such as the "dollar" and "value" menus, is often promoted as an affordable meal option. Today's fast-paced lifestyles also align with the "drive through" approach. However, due to the negative effects associated with eating too much fast food, it is a good idea to limit portions and how frequently you eat these foods. Here are just a few of the reasons why cutting back on fast food is beneficial to your health. High Fat Content, Lots of Sodium and Excess Calories Weight gain is an obvious drawback to fast food consumption. While some chains do offer healthier choices, a large percentage of the menu choices at these eateries are fried, come with complementary fried side dishes, and are topped off with a sugary drink and/or dessert. These are the menu staples and the ones people mostly buy. Many burgers contain excessive amounts of calories, sodium, and fat, none of which is good as part of a regular diet. A 2013 report by the World Health Organization pointed out links between fast foods, obesity, and increased body mass index (BMI).1 Too much salt is another problem. Despite the increased emphasis on lowering sodium to improve cardiovascular health, a 2013 report showed the level of sodium in both processed and fast foods was still quite high. According to a study published in the Journal of the American Medical Association, an evaluation done between 2005 and 2011 showed levels of sodium hadn't changed much despite increased awareness.3 Too much sodium increases blood pressure and puts extra stress on the kidneys. High blood pressure is a leading cause of many negative health effects including heart failure, heart attack, stroke and kidney disease, to name a few.
Recommendations of sodium intake are placed at 2,300 mg a day, less for people with high blood pressure or other cardio-related disease. Reportedly, the average American takes in between 3,300 and 3,400 mg of sodium a day. Fast foods are a big offender in adding too much sodium. While calories are needed for energy and a level of fat is also required by the body, much of the fat in the typical fast food meal is "bad" fat. Even pizza can be very deceptive, as earlier research has shown some types of pizza are just as bad as the drive-through meals.4 Possibly even more so, since people often eat two or more slices of pizza (as opposed to eating one burger). One fast food meal can equate to the total recommended amount of fat for the entire day. For individuals who eat these on-the-go foods more than once a day, daily recommended allowances can easily be doubled, which causes both rapid weight gain and negative health effects. Selecting healthier alternatives, complete with "good" fats and less salt, is a much better option for both your figure and your health. Organ Damage and Health Risks A 2008 study that examined 18 slim and healthy Swedish men and women concluded that after eating a fast food-centered diet and eliminating exercise, they gained an average of 16 pounds and showed signs of liver damage. The study consisted of the subjects eating these quickie meals twice a day for four weeks; during this time period the participants did not exercise. At the time, ABC News' Good Morning America reported: Studies have shown that a diet high in fat and calories — the magic recipe for delicious, greasy fast food — puts people at greater risk for obesity and type 2 diabetes, both of which can lead to cardiovascular diseases and heart failure. 5 There are a few more very good reasons to avoid eating too much fast food in a regular diet. The extra fat contained in these meals can result in severe liver damage with prolonged consumption.
The excess sodium can lead to high blood pressure and heart disease. Aside from the obvious things, such as weight gain, some experts say eating processed foods can also have an effect on the physical appearance of a person. Ways junk food may affect physical appearance include: - Increased frequency of acne (although it is not the consumption necessarily, but the handling of food and then touching the face) - Skin aging at a faster rate - Contributing to mood swings - A "puffy" appearance due to water retention (high sodium) According to West Virginia University, "What you eat directly affects how you feel and look and definitely your academic performance." 12 The same point applies beyond school, to work and home life. Health of the Family Drive-through meals can also have a negative impact on family life. In a world that has rapidly evolved to embrace convenience, fast food meets these requirements; however, it also encourages eating on the run. This equates to less time to focus on prepared meals eaten at home around a dinner table. Additionally, through on-the-go meals, children do not learn how to develop healthy eating choices. Not only is eating on the run with poor diet choices bad from a health perspective, it can also erode family meal time, which is a good time for bonding, learning what members have been up to during the day and simply enjoying one another's company. Kids tend to emulate their parents, and if everyone is indulging in processed foods, this can set a life-long habit and establish unhealthy eating patterns. The GMO Factor Last, but not least, there is the GMO factor. Did you know almost all of the corn grown in the U.S. and Canada is genetically modified? Most of the soy grown is GMO-based also, and sugar is another food that is steadily becoming GM-based (various types of corn syrups have largely replaced ordinary table sugar as a sweetener in processed foods). As a result, it is almost certain GMOs are in fast foods.
Yet, there is still much debate on the long-term effects of these foods on health and the environment. At this time, countries such as the United States and Canada do not require GMO foods to be labeled. In essence, people are eating these foods without knowing whether they are ingesting GMOs, and so cannot make informed decisions. On another note, there are even suggestions that fast food can be addictive due to sugar and other factors associated with it. 10 While fast food is a great convenience, its many negative effects easily dominate any possible benefits. There is really no health benefit in consuming it on a regular basis. You can improve your health, energy and overall well-being by eliminating, or significantly limiting, these foods in a routine diet. Like anything else, moderation is key, and where fast food is concerned, less is probably more. [Related reading: Reasons to Avoid Drinking Soda ]
<urn:uuid:e492b8fd-b04d-4c3d-9b1e-9c52595e5c4a>
CC-MAIN-2018-17
http://www.infobarrel.com/Reasons_to_Cut_Back_on_Eating_Fast_Food__
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937113.3/warc/CC-MAIN-20180420022906-20180420042906-00549.warc.gz
en
0.96899
1,396
3.078125
3
Topic: The Oldman Collection Is part of topic Pacific Cultures at Te Papa In 1948, the New Zealand government purchased the Maori and Pacific collection of the London dealer W O Oldman. The collection was divided on indefinite loan among the four large New Zealand metropolitan museums, with small amounts also going to smaller public museums with adequate fireproof buildings. The Dominion Museum received the bulk of the Maori, Marquesan, New Caledonian, and Admiralty Island components of the collection together with small numbers of items from other island groups. Because these items had passed through various sale rooms in Britain, they often lack detailed information on their origins or historical context, but their quality is outstanding. Find additional information about this topic at these sites - National Film Unit clip at YouTube
<urn:uuid:5d2c9433-9d63-4c67-9478-a33048469bb6>
CC-MAIN-2015-32
http://collections.tepapa.govt.nz/topic/1337
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042992201.62/warc/CC-MAIN-20150728002312-00273-ip-10-236-191-2.ec2.internal.warc.gz
en
0.948987
159
2.90625
3
Sex differences in group composition and habitat use of Iberian free-range pigs The aim of the present work was to study group size, group composition and habitat use of Iberian pigs along the year when reared outdoors. This is a regimen in which animals are reared free range from 2 months of age until at least 14 months of age. In a first stage, animals are supplemented with concentrates, and in a second, called montanera, pigs eat only natural resources in areas with no more than two pigs per hectare. In these systems, males are castrated to avoid boar taint and females are spayed to avoid attraction and mounting by wild boars. The study was carried out in five different farms located in the south-west of Spain during 2 consecutive years, from March 2012 to February 2014, under the montanera regimen, with a total of 995 animals observed (498 males and 497 females). The data were analyzed with SAS by means of general models and proc mixed. Mean group size along the year was 17 ± 12.9 individuals, but this was significantly lower (P < 0.05) during the montanera (12 ± 0.8) and at midday (13 ± 0.8). Groups were bigger (P < 0.05) when they were more than 50 m from a tree (23 ± 1.8), or <10 m from the shelter (25 ± 1.5), the feeding area (31 ± 3.1) and the water-bath area (25 ± 1.5). Nine percent of the groups were solitary animals; this proportion was higher (P = 0.0286) during the montanera (11%) than the rest of the year (8%), and 68% of these solitary animals were males. Males were less involved in mixed groups than were females (75% vs. 91%), especially in spring, where the largest (P < 0.0001) male groups were found. Female groups were less frequent and smaller (P < 0.0001) than were male and mixed groups. In conclusion, although males were castrated at a very young age, they showed a different behavior than females, forming bachelor groups during the spring, being less involved in mixed groups, and more often being solitary.
During the montanera, when animals were feeding on acorns and other natural resources, groups were smaller and closer to the trees, and solitary males reached their maximum percentage. Frontiers in Veterinary Science Dalmau, Antoni, Míriam Martínez-Macipe, Xavier Manteca, and Eva Mainau. 2020. "Sex Differences in Group Composition and Habitat Use of Iberian Free-Range Pigs". Frontiers in Veterinary Science 7. doi:10.3389/fvets.2020.600259. Grant agreement: INIA/National Programme for Fundamental Research Projects/RTA2010-00062-C02-01/ES/Welfare of the Iberian pig in montanera: assessment using the Welfare Quality protocols and the effect of alternatives to surgical castration of males and females on behaviour, meat quality and consumer acceptability. Licensed under http://creativecommons.org/licenses/by/4.0/
<urn:uuid:4726da96-bf02-48a8-a499-15712d62578e>
CC-MAIN-2021-39
https://repositori.irta.cat/handle/20.500.12327/1016?locale-attribute=es
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00686.warc.gz
en
0.814819
855
3.21875
3
Spencer Wells has a job that most people would kill for. He is explorer-in-residence for National Geographic and his work has taken him to every corner of the globe. His particular interests have nothing to do with wild places, however. His fascination lies with the people who inhabit these remote corners: how did they get there and what are their biological relations with other inhabitants of the planet? Wells is a geneticist and leader of the Genographic project, funded by National Geographic, which has traced the movements of human populations since we first emerged from our sub-Saharan homeland 100,000 years ago and colonised the planet. In the process of this work, Wells noted that a swath of genetic changes occurred to our species around 12,000 years ago. This was a crucial period for humanity as it was around this time that agriculture began its inexorable spread across Europe and Asia, changing Homo sapiens from hunter-gatherers to farming folk. The consequences were profound and not always beneficial. They also point to future problems for our species, as Wells argues in his latest book, Pandora's Seed: The Unforeseen Cost of Civilization (Allen Lane). Why did humanity turn to agriculture 12,000 years ago? We were backed into a corner. Conditions changed at the end of the ice age 17,000 years ago and, as the climate got warmer, populations began to expand. Then there was a sudden reversal to ice age conditions during the Younger Dryas period 12,500 years ago. The land could no longer support that growing population and we had to innovate – by developing farming. It made sense in the short term, but there were consequences, both pleasant and unpleasant. Give us some examples. Before agriculture, humans were living on diets that included more than 150 plant species. Then we started farming and that figure went down to around eight. In fact, most calories came from wheat and barley, which are full of carbohydrates but have little protein. 
Human health declined sharply. We got shorter and life expectancy plunged. In many parts of the world, it has yet to recover. You argue that there were other consequences for society. Yes. When we were hunter-gatherers, we were relatively egalitarian. Then the population expanded and the first towns appeared. We had to find a way of ruling those people. So governments came into play. In addition, most hunter-gatherer societies today have a panoply of deities. Gods are everywhere. But as we started to control nature, we saw ourselves above nature. Gods start to take on a human form, so monotheism appears around this time. Then there is the issue of sexual relationships, which were probably egalitarian, as they are among hunter-gatherers today. However, as we built more cities and, ultimately, empires, military might was needed and being physically strong and having military prowess became important. Men, being stronger than women, probably developed a higher social standing this way. What about the future? It is difficult to say what life is going to be like in 100 years, but certain things have clearly been set in motion, among them climate change. That is why I have developed the concept of transgenerational power – the idea that we are making decisions locally, in the here and now, though these will take generations to play out. We need more energy, so we pump the stuff out of the ground, but have only now realised, generations down the road, that there are unanticipated consequences. Similarly, we are developing the ability to chose the genes we want for our offspring and, therefore, for our grandchildren and our great-grandchildren and so on. Are we going to make the right decisions for the next century or the next millennium? We have not adapted psychologically to the notion of long-term consequences. Can Homo sapiens do that? We have to become capable of it. At the dawn of agriculture, there were around 5 million people on the planet. 
There are 6.8 billion today and this figure is expected to peak at around 9.5 billion by 2050. For the first time in 70,000 years, for lots of reasons, human numbers will have reached a steady state. At that time, there will be more people moving into the retirement category and fewer people in the young worker category. There will be more people over 60 than under 15 for the first time in history – all over the world, not just in the developed world. Will we be able to cope by 2050? It is a question of utilising resources in a more intelligent way. I have a phrase for it: want less. I think that is the lesson we can take from current hunter-gatherer groups, people who still live in a way that our ancestors did. They live within constraints. We have got used to expansion and dominance. Learning to recognise that we have limitations is going to be important. Are you hopeful? I am because I think humans have the ability to innovate. The issue is seeing the consequences, realising that there is a cost to what we are doing and recognising that now. We are not adapted to think in those terms. But if we can see that there are tangible consequences to what we are doing in the here and now, then I think we will be spurred into action.
<urn:uuid:fa9d3d34-1394-4722-b051-217a1cfe67f0>
CC-MAIN-2017-34
https://www.theguardian.com/technology/2010/jun/06/my-bright-idea-spencer-wells
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00203.warc.gz
en
0.980578
1,089
3.671875
4
Topics Covered: Conservation related issues. Green panel allows Great Nicobar plan to advance: The Environment Appraisal Committee (EAC) – Infrastructure I of the Ministry of Environment, Forest and Climate Change (MoEFCC) has flagged serious concerns about NITI Aayog’s ambitious project for Great Nicobar Island. - The committee has, however, “recommended” it “for grant of terms of reference (TOR)” for Environmental Impact Assessment (EIA) studies, which in the first instance will include baseline studies over three months. About the project for Great Nicobar Island: The proposal includes an international container transshipment terminal, a greenfield international airport, a power plant and a township complex spread over 166 sq. km. (mainly pristine coastal systems and tropical forests), and is estimated to cost ₹75,000 crore. What are the main concerns? - The plan document has no information about a note on seismic and tsunami hazards, freshwater requirement details and details of the impact on the Giant Leatherback turtle. - Besides, there were no details of the trees to be felled — a number that could run into millions since 130 sq. km. of the project area has some of the finest tropical forests in India. - The committee raised a number of additional issues, including about Galathea Bay, the site of the port and the centrepiece of the NITI Aayog proposal. Galathea Bay is an iconic nesting site in India of the enigmatic Giant Leatherback, the world’s largest marine turtle. Action points listed out by the committee: - The need for an independent assessment of terrestrial and marine biodiversity. - A study on the impact of dredging, reclamation and port operations, including oil spills. - The need for studies of alternative sites for the port with a focus on environmental and ecological impact, especially on turtles, analysis of risk-handling capabilities. 
- A seismic and tsunami hazard map, a disaster management plan, and details of labour, labour camps and their requirements.
- An assessment of the cumulative impact, and a hydro-geological study to assess the impact on ground and surface water regimes.

Need for conservation:

Ecological surveys in the last few years have reported a number of new species. These include the critically endangered Nicobar shrew, the Great Nicobar crake, the Nicobar frog, the Nicobar cat snake, a new skink (Lipinia sp.), a new lizard (Dibamus sp.) and a snake of the Lycodon sp. that is yet to be described.

Sources: the Hindu.
Effects of global warming are irreversible, report says

Researcher advises quick action before situation worsens

WASHINGTON – Many damaging effects of climate change are already basically irreversible, researchers declared yesterday, warning that even if carbon emissions can somehow be halted, temperatures around the globe will remain high until at least the year 3000.

“People have imagined that if we stopped emitting carbon dioxide the climate would go back to normal in 100 years, 200 years; that's not true,” climate researcher Susan Solomon said in a teleconference.

Solomon, of the National Oceanic and Atmospheric Administration's Earth System Research Laboratory in Boulder, Colo., is lead author of an international team's paper reporting irreversible damage from climate change, being published in today's edition of Proceedings of the National Academy of Sciences. She defines “irreversible” as change that would remain for 1,000 years even if humans stopped adding carbon to the atmosphere immediately.

The findings were announced as President Barack Obama ordered reviews that could lead to greater fuel efficiency and cleaner air, saying the Earth's future depends on cutting air pollution.

Said Solomon, “Climate change is slow, but it is unstoppable” – all the more reason to act quickly, so the long-term situation doesn't get even worse.

Alan Robock of the Center for Environmental Prediction at Rutgers University agreed with the report's assessment. “It's not like air pollution where if we turn off a smokestack, in a few days the air is clear,” said Robock, who was not part of Solomon's research team. “It means we have to try even harder to reduce emissions,” he said in a telephone interview.

Solomon's report “is quite important, not alarmist, and very important for the current debates on climate policy,” added Jonathan Overpeck, a climate researcher at the University of Arizona.
In her paper, Solomon, a leader of the Intergovernmental Panel on Climate Change and one of the world's best known researchers on the subject, noted that temperatures around the globe have risen and changes in rainfall patterns have been observed in areas around the Mediterranean, southern Africa and southwestern North America.

Warmer climate also is causing expansion of the ocean, and that is expected to increase with the melting of ice on Greenland and Antarctica, the researchers said. “I don't think that the very long time scale of the persistence of these effects has been understood,” Solomon said.

Global warming has been slowed by the ocean, Solomon said, because water absorbs a lot of energy to warm up. But that good effect will not only wane over time; the ocean will also help keep the planet warmer by giving off its accumulated heat to the air.

Climate change has been driven by gases in the atmosphere that trap heat from solar radiation and raise the planet's temperature – the “greenhouse effect.” Carbon dioxide has been the most important of those gases because it remains in the air for hundreds of years. While other gases are responsible for nearly half of the warming, they degrade more rapidly, Solomon said.

Before the industrial revolution the air contained about 280 parts per million of carbon dioxide. That has risen to 385 ppm today, and politicians and scientists have debated at what level it could be stabilized. Solomon's paper concludes that if CO2 is allowed to peak at 450-600 parts per million, the results would include persistent decreases in dry-season rainfall comparable to the 1930s North American Dust Bowl in zones including southern Europe, northern Africa, southwestern North America, southern Africa and western Australia.
Gerald Meehl, a senior scientist at the National Center for Atmospheric Research, said, “The real concern is that the longer we wait to do something, the higher the level of irreversible climate change to which we'll have to adapt.” Meehl was not part of Solomon's research team.

While scientists have been aware of the long-term aspects of climate change, the new report highlights and provides more specifics on them, said Kevin Trenberth, head of climate analysis at the center. “This aspect is one that is poorly appreciated by policymakers and the general public and it is real,” said Trenberth, who was not part of the research group. “The temperature changes and the sea level changes are, if anything, underestimated and quite conservative, especially for sea level,” he said.
Teething – Separating Fact from Fiction

There are many misconceptions surrounding teething in the community. Teething tends to be blamed for miserable children aged from 3 months to 3 years! Our lead dentist, Claire, decided to look into some of the issues surrounding teething when her daughter’s teeth started to come through at 7 months old.

When do the baby teeth develop?

The 20 baby teeth (also known as milk, primary or deciduous teeth) begin their development very early in pregnancy, when a foetus is 5 weeks old. They are completed by the 14th week of pregnancy. The first baby teeth to come through are the two lower front teeth (central incisors), at around 4-8 months of age. However, when the teeth start to come through varies from child to child, although the order/sequence in which the teeth come through doesn’t usually vary.

What are the symptoms of teething?

Studies have found that symptoms related to teething occur over an 8-day period, from before the tooth comes through to when the tooth has emerged through the gum. Teeth coming through (erupting) is often accompanied by:
- Rash around mouth/face
- Pulling/rubbing ear
- Low level fever
- Gum rubbing/chewing fingers
- Loss of appetite

A third of children display no symptoms of teething prior to a tooth emerging through the gum.

What is unlikely to be caused by teething?

The first teeth come through at around the same time as passive immunity (a result of maternal antibodies from the placenta) declines and exposure to a wide variety of childhood illnesses occurs. This most likely explains why so many ailments are attributed to teething by parents and the wider community. One study found that a number of infants who were reported to be ‘teething’ by a parent actually had a herpes simplex virus.
There is no scientific evidence to link the following symptoms with teething, and a more appropriate explanation/diagnosis should be sought:
- Rash (other than on the face)
- Nasal congestion
- High fever

What treatments work?

Cold teething rings

The cold temperature and the pressure on the gums caused by chewing cause the blood vessels in the gums to reduce in size, which decreases inflammation and can help to relieve pain. Similar items include:
- Hard (sugar-free) teething rusks/bread-sticks
- Chilled or frozen fruit
- Chilled pacifier
- Rubbing gums with a cool spoon or wet gauze

Paracetamol is a painkiller (analgesic). It reduces the production of proteins called prostaglandins, which are made by the body when inflammation is present. Paracetamol is also antipyretic, which means it is able to reduce the low-level fever associated with teething.

What treatments are not very helpful?

Teething gels are very popular with parents. However, caution should be exercised when using topical gels in children, as they could actually be harmful due to excessive application and swallowing. Teething gels containing choline salicylate (such as Bonjela and Ora-Sed) have been removed from sale in the UK and USA, as choline salicylate (a salicylate related to aspirin) is associated with a condition in children called Reye’s syndrome. Bonjela and Ora-Sed sold in New Zealand still contain choline salicylate. Frequent application of these kinds of teething gels can also cause a chemical burn.

Teething gels may also contain local anaesthetics (numbing agents) such as lidocaine or benzocaine. A rare condition called methemoglobinemia can occur when benzocaine is ingested or absorbed through the mouth. Methemoglobinemia interferes with the blood’s ability to transport oxygen. In 2014, the US Food and Drug Administration (FDA) reviewed 22 cases of serious reactions to teething gels containing local anaesthetics.
The FDA have since advised against local anaesthetic gels being used to treat infants and children with teething pain.

Teething Jewellery/Amber beads

The US Food and Drug Administration (FDA) notes that the risks of using teething jewellery include choking, strangulation, injury to the mouth, and infection. Choking may occur if the jewellery breaks and small beads or the whole piece of jewellery enter the child's throat or airway.

There is also no peer-reviewed, scientific evidence that amber beads release succinic acid (which is supposed to provide a painkilling action; the amber would need to be heated to at least 200°C) or that succinic acid could be absorbed through the mouth. However, biting on the beads will place pressure on the gums and decrease inflammation via the same action as teething rings.

As a parent, it is difficult to see your little one experiencing any kind of pain, and very natural to seek methods to reduce their discomfort. However, teething is a natural bodily process that we all go through, and the actual symptoms of teething are very short-lived. TLC, paracetamol and teething rings are the only proven methods of helping your child. For further information, come and have a chat with us at the Dental Studio.
by Sandi Watson

Have you ever seen a llama or alpaca up close? They are beautiful animals, with their big eyes, flirty eyelashes, long legs, and soft fleece.

|alpacas in Pacchanta, Peru, April 2010|

For the Heifer project families who raise them, llamas and alpacas are also tremendously useful. After turning the fleece into yarn, families can create blankets, hats, ponchos, and other items. During our volunteers' study tour in Peru, we learned that these beautiful weavings are also part of a rich cultural heritage. Special symbols such as condors, alpacas, mountains, and rivers honor Mother Earth. Sometimes white fibers are dyed, using inks made from local plants. Other times, the weavers use only the naturally occurring colors: rich browns, pale taupes, creamy whites. All these hand-crafted pieces are an important source of income for the families.

Llamas thrive at high altitudes, as we saw when we visited Pacchanta (over 13,000 feet above sea level). They are nimble and strong, able to carry loads to market. And because they are related to camels, they don't need much water.

The next time you need a fun gift for a loved one, consider giving a llama in your beloved's name. The person you honor will be thrilled, and you'll make a tremendous difference in the lives of the people who receive your gift!

This post originally appeared on the Heifer in Boston volunteer blog.
The CLP Regulation

The Classification, Labelling and Packaging Regulation (EC) No. 1272/2008, more commonly known as CLP, implements the Globally Harmonised System of Classification and Labelling of Chemicals (GHS) in the European Union. CLP has replaced the previous system for classifying and labelling chemicals set out in the Dangerous Substances Directive (DSD) 67/548/EEC and the Dangerous Preparations Directive (DPD) 1999/45/EC (implemented through the CHIP Regulations 2009 in the UK).

CLP sets out a number of requirements for suppliers of chemical products:
- To classify (assess the hazards of) chemicals before placing them on the market
- If classified as hazardous, to label and package them appropriately
- For suppliers at the top of the EU supply chain, i.e. manufacturers and importers, to notify the classifications of hazardous chemicals to the classification and labelling inventory
- To document their classifications and keep them up to date as necessary

The transitional period for the introduction of CLP finished on 1 June 2015, and all chemical products must now be classified, labelled and packaged according to CLP. However, for mixtures that were already placed on the market before 1 June 2015 and were classified, packaged and labelled according to the Dangerous Preparations Directive (DPD), a further two-year transition period, until 1 June 2017, is still available to allow them to work their way through the supply chain.

WHAT IS THE GHS?

Development of the GHS began following the 1992 Rio de Janeiro UN Conference on Environment and Development, with the aim of providing a consistent, harmonised system for the identification and communication of chemical hazards, so as to enable the safe use, transport and disposal of chemicals. The GHS is regularly updated, on a two-yearly basis. The 6th Edition of the GHS will be published in July 2015. More information about GHS can be found on the GHS pages of the UNECE website.
The GHS is a framework for regulation and, as such, has no legal status. It must be implemented in each country or region by appropriate legislation. Many countries have adopted, or are in the process of adopting, the GHS into their regulatory systems. Further information on the progress of these countries, together with links to relevant national legislation, is available here. International transport rules for dangerous goods are also aligned with GHS.

CLP IS NOT THE SAME AS GHS

The GHS is based upon a “building block” approach, allowing countries to select those elements they believe to be most relevant. In the EU, only those building blocks that most closely matched the DSD and DPD system have been incorporated into CLP. This may result in different classifications for some products in the EU compared to other parts of the world where different building blocks have been selected.

CLP also includes some additional classification elements from the DSD and DPD which are not yet included in the GHS, and some special phrases to be added to labels for certain types of mixtures. It includes requirements for the content, size and format of labels, and additional requirements such as child-resistant packaging and tactile warnings. The EU has also sought to retain some of the benefits from the 40 years of experience with the DSD and DPD, such as the list of harmonised classifications now included in Annex VI of CLP.

CLP allows for two types of classification:
- harmonised classification, in which the classification and labelling of a substance for some or all hazard endpoints is agreed at EU level; and
- self-classification, where the substance or mixture is classified for some or all hazard endpoints by the supplier.

The system of harmonised classification introduced through the DSD has been transferred to CLP, and the existing classifications from Annex 1 of the DSD have been transferred to Annex VI of CLP, together with their new CLP equivalents.
Annex VI is regularly updated through amendments to CLP known as ATPs (Adaptations to Technical Progress). Six ATPs have so far been published, and more are in progress. It is expected that Annex VI will be updated at least annually. The process of agreeing a harmonised classification for a substance can be followed here on the ECHA website.

Harmonised classifications do not normally cover all possible hazard end-points, and suppliers using these harmonised classifications will need to check and, if necessary, self-classify for all other hazard end-points. Current policy is to focus chiefly on CMR effects and on respiratory sensitisation, and harmonised classifications will normally only be made for other hazard end-points on a case-by-case basis. New harmonised classifications for active biocides and plant protection substances, however, will normally cover all hazard end-points.

Where no harmonised classification exists for a chemical substance, or there are end-points not covered by the harmonised classification, suppliers will need to self-classify. Self-classification is also needed for mixtures. CLP allows for a number of different approaches to self-classification, depending on the data available for the substance or the mixture, and the components of mixtures.

PHYSICAL HAZARDS

The classification criteria for physical hazards within the GHS, and therefore CLP, were largely based upon those of the UN Recommendations on the Transport of Dangerous Goods (commonly known as the Orange Book) and are therefore harmonised with them. Classification is usually based upon test data, although screening criteria are available for many end-points; these will often indicate that a particular hazard is unlikely and that testing is therefore not required. However, if there is insufficient information to allow an assessment to be made for a particular end-point, then CLP imposes an obligation on the supplier to carry out new testing for physical hazards.
Since 1 January 2014, such tests have needed to be carried out to a recognised quality standard.

HEALTH AND ENVIRONMENTAL HAZARDS

The selection of building blocks by the Commission was designed to avoid significant changes in scope between CLP and the DSD and DPD. However, there have been some significant changes in the way that classifications for mixtures are calculated, particularly for corrosives and irritants, which mean that many more mixtures will be classified as hazardous under CLP than were classified under the DPD.

A key difference between the health and environmental hazards and the physical hazards is that where there is insufficient information to classify for a particular end-point, CLP does not oblige suppliers to carry out new testing. CLP makes clear that new testing involving animals should only be carried out as a last resort. Any decisions on new testing should also take into account REACH obligations for the product.

A number of labelling elements have changed under CLP. There are new pictograms; the descriptive Indications of Danger, such as “Flammable” and “Harmful”, are replaced by the simpler Signal Words “Danger” and “Warning”; and Risk and Safety Phrases are replaced by Hazard Statements and Precautionary Statements. In many cases these new phrases are very similar to the old phrases, but there are also many new phrases, particularly among the Precautionary Statements.

Following the GHS and CLP criteria, many substances and mixtures will be assigned a large number of Precautionary (P) statements – often 20-30, or even more if the product has several hazards. The CLP Regulation, following on from normal practice under the DSD and DPD, recommends that normally no more than six P statements should appear on the label. Selection of these P statements may be a complex process for some products, and further guidance is available in the Guidance on Labelling and Packaging in Accordance with the CLP Regulation.
Labels under CLP must include the following elements:
- name, address, telephone number
- nominal quantity for packages available to the general public, unless stated elsewhere on the package
- product identifiers
- signal words
- hazard statements
- appropriate precautionary statements
- supplemental information

CLP sets out some basic rules for the arrangement of labels, e.g. obligatory elements must be located together, and there are minimum sizes for labels and pictograms. Additional information required by, for example, the BPD or VOC legislation is considered to be ‘supplemental information’ for the purposes of labelling. For multilingual labels, the hazard and precautionary statements need to be kept together for each language.

This page was last updated on 4th June 2015.
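As an illustration only, the mandatory label elements listed above can be treated as a simple checklist and validated programmatically. The sketch below is hypothetical: the field names, the example product data and the `check_label` helper are inventions for this example, not part of CLP itself; only the required-element list and the "normally no more than six P statements" recommendation come from the text above.

```python
# Hypothetical sketch: CLP label elements as a checklist (field names are
# illustrative, not taken from the Regulation).
from dataclasses import dataclass, field
from typing import List

REQUIRED_FIELDS = [
    "supplier_identity",          # name, address, telephone number
    "product_identifiers",
    "signal_word",                # "Danger" or "Warning"
    "hazard_statements",
    "precautionary_statements",
]

@dataclass
class ClpLabel:
    supplier_identity: str = ""
    product_identifiers: List[str] = field(default_factory=list)
    signal_word: str = ""
    hazard_statements: List[str] = field(default_factory=list)
    precautionary_statements: List[str] = field(default_factory=list)

def check_label(label: ClpLabel) -> List[str]:
    """Return warnings for missing or non-recommended label elements."""
    problems = []
    for name in REQUIRED_FIELDS:
        if not getattr(label, name):
            problems.append(f"missing element: {name}")
    # CLP recommends that normally no more than six P statements appear.
    if len(label.precautionary_statements) > 6:
        problems.append("more than six precautionary statements on label")
    return problems

# Example data (illustrative values for acetone).
label = ClpLabel(
    supplier_identity="Acme Chemicals, 1 Example Street",
    product_identifiers=["acetone", "EC 200-662-2"],
    signal_word="Danger",
    hazard_statements=["H225", "H319", "H336"],
    precautionary_statements=["P210", "P261", "P305+P351+P338"],
)
print(check_label(label))  # []
```

A real implementation would of course also need the full CLP selection logic for H and P statements; this sketch only shows the checklist idea.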
ICD 9 code for knee pain is not a very familiar term for most people; it is mostly used by doctors, or by patients who are looking for a solution to knee injuries. These codes are used by medical professionals to describe the condition of the patient and the treatment given or required to cure the disease. Different codes are used for different diseases, so we can say that it is a universal medical language used for the analysis and treatment of disease. For knee joint related problems, the ICD 9 code is the standard form used in different countries. Under this code there are sub-codes which are used for the different diseases in the knee joint category.

What is the ICD 9 code for knee pain?

It is a coded form of communication which describes the medical situation of the knee. Any injury, symptom or medical condition related to the knee is expressed in a coded form. Knee injuries can be due to various causes, such as fracture, dislocation, ligament tear or another cause, and their effects can include swelling, chronic pain or other severe symptoms. ICD, which stands for the International Statistical Classification of Diseases, is a coded way of recording and analysing different diseases and the treatment required. The ICD 9 coding is different for different types of knee pain: for ordinary knee pain, i.e. patellofemoral pain, the ICD code used is 719.46, whereas for lateral knee pain, which in medical terms is called enthesopathy, the ICD code is 726.64, and for anterior knee pain, i.e. patellar tendinitis, the code used is 726.60. ICD 9 codes are a commonly accepted way of recording the various details of knee injuries. Since the code is globally accepted, it is easy for a patient to undergo treatment in any country without any trouble.
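The code-to-condition mapping described above behaves like a simple lookup table. Here is a minimal, illustrative sketch in Python: the dictionary contains only the three codes as this article states them (real ICD-9-CM tables contain thousands of entries), and the `describe` helper is a made-up name for the example.

```python
# Illustrative lookup of the three knee-pain ICD-9 codes quoted in this
# article; real ICD-9-CM tables are far larger.
ICD9_KNEE_CODES = {
    "719.46": "patellofemoral knee pain",
    "726.64": "enthesopathy (lateral knee pain)",
    "726.60": "patellar tendinitis (anterior knee pain)",
}

def describe(code: str) -> str:
    """Return the description for a known code, or a fallback message."""
    return ICD9_KNEE_CODES.get(code, f"unknown code: {code}")

print(describe("726.60"))  # patellar tendinitis (anterior knee pain)
print(describe("999.99"))  # unknown code: 999.99
```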
The ICD 9 code not only helps in understanding the medical situation behind the knee pain and the treatment required to cure it, but it also helps in making decisions about the kind of treatment required and the medicines that can be used. The code is also a great help for insurance providers, as it gives them the details of the disease, making it easy for them to provide compensation for the treatment.

The code is important for people who visit a different hospital and need to explain their knee problem to the doctor. Since there are a number of diseases that cause knee pain, it is difficult for a doctor to treat a patient without a confirmed diagnosis. The ICD 9 code makes it easy for patients as well as doctors to describe the disease and obtain treatment in the best possible way. Since different joint pains have different codes, the proper code is essential to diagnose a disease correctly.

The ICD 9 code is helpful for doctors, patients, nurses and health insurance providers, saving the time otherwise needed to find the details of the disease. So if you are looking for knee treatment in your own country or abroad, you need not worry about explaining the disease to the doctor, as they can easily detect and understand the condition of your knee injury from the code.
1. What essential question does one still need to ask in order to help make the diagnosis?
2. What is the tentative diagnosis?
3. Can you list at least three differential diagnoses?
4. What features of this condition differentiate it from other conditions in your differential diagnosis?
5. What is the suitable treatment for this condition?

Treating A Patient With Multiple, Pruritic Open Lesions On Both Feet

A 32-year-old female presents to the clinic with a chief complaint of multiple pruritic lesions on the tops of both feet. The lesions have been present for several months and appear to be increasing in number and size. The patient has not seen any other physician for this problem, and she has not been putting any medications on the condition.

The patient reports the lesions start as very small red bumps and itch a great deal. After scratching the bumps, she says, the lesions get bigger and new itchy bumps appear around the area within a few days or so. The condition is so bad at this time that she cannot wear closed shoes, go to work or take care of her family.

After further questioning, the patient stated that prior to the current skin condition, she had no known exposure to any chemicals, paints, toxins, irritants or other potential allergens. She also notes she is not taking any medication, vitamins or supplements. The patient also has no known allergies to any medications or environmental agents. No one else in her household or within her family has any similar skin conditions.

What Does The Physical Examination Reveal?

The physical examination revealed a large number of scratches, excoriations and open areas with surrounding erythema on the dorsum of both feet. Some of the lesions appeared new and some looked much older. There were no primary lesions anywhere else on the feet or lower legs. There were no other rashes, skin changes or edema. At the time of the visit, the lesions were symptomatic and the patient was reporting pruritus as the main complaint.
A careful examination found no other similar-appearing lesions on the upper extremities, torso, head and neck region. There were no other obvious dermatological findings other than the ones noted on the initial examination. There were no other positive findings during the rest of the physical examination.

What You Should Know About Factitial Dermatitis

Factitial dermatitis (dermatitis artefacta) means self-inflicted lesions of the skin. The lesions are in sites that are readily accessible to the patient’s hands. In many cases, patients may cause deep excoriations with the fingernails, but the lesions may also be caused by sharp instruments such as knives, the application of caustic chemicals, or burning, sometimes with cigarettes or matches. The most common locations are the extensor surfaces of the extremities, the tops of the feet, the face, the upper shoulders and the back. The patients may or may not be aware that they caused the skin damage themselves, and they usually deny having intentionally inflicted the injury.

There are several reasons for patients to self-inflict wounds on their own bodies. Most patients with dermatitis artefacta have some underlying psychological issue that may be caused by stress, anxiety, depression or drugs. If the dermatitis is a single episode that was triggered by a particularly difficult situation (such as divorce, loss of a job or a death in the family), about 70 percent of all patients will stop the self-injury once the situation is resolved. However, about 30 percent of the cases of dermatitis artefacta are ongoing and recurrent, and represent a long history of psychological problems. Other issues, such as the use of street drugs, especially methamphetamine, may cause some patients to see or feel bugs on their skin (“crank bug bites”). They attempt to remove them by picking at them until they create open wounds or sores.
Patients with factitial dermatitis, in which the skin lesions are directly produced or inflicted by their own actions, usually present with this condition as the result or manifestation of a psychological problem. It could be a form of emotional release in situations of distress, anxiety or depression, or part of attention-seeking behavior (usually seen among younger women). In a few cases, the cause may be an underlying attempt to secure a work-related insurance claim or disability payment.

However, in all cases of dermatitis artefacta, the presenting lesions are difficult to recognize and do not conform to those of known dermatoses. In other words, there are no primary skin lesions (those that are a direct expression of a skin disease, such as macules, papules, plaques, nodules, vesicles, pustules or cysts). There are only secondary skin lesions (those that follow a skin condition, such as ulcers, erosions, excoriations, crusts, scabs, scars or atrophy). This typically will give the doctor a clue as to the origin of the condition.

There is a 4:1 female-to-male ratio for factitial dermatitis. Some associated traits include low self-confidence, generalized apprehension, meticulousness, depressive mood disorder and hypersensitivity to perceived negativism toward themselves. Concurrent symptoms of severe headache or menstrual disorders are common in many of these patients.

The lesions in very young children are characteristically not self-inflicted but are caused by abusive adults. There is also a condition called Munchausen’s syndrome by proxy, whereby a parent or guardian will inflict skin injuries on a child in an attempt to convince doctors that the child has a serious dermatitis or needs ongoing medical care.

One would diagnose factitial dermatitis via classical clinical findings. A patient’s history may suggest some obvious reasons for the pruritus.
These reasons may include preexisting atopic dermatitis, contact dermatitis, insect bites or food allergies. In order to exclude any medical causes of generalized pruritus, physicians may perform the following simple tests: complete blood count with differential; chemistry profile; thyroid-stimulating hormone levels; and fasting plasma glucose level. Patch testing for allergens and fungal cultures may be necessary when the condition appears to be non-responsive to the initial treatment of covering the lesions. Perform the appropriate workup for malignancy if this is indicated by the patient’s history. In persistent cases, a simple biopsy will be beneficial.

Xerosis, or generalized dry skin, is the most common cause of pruritus among older patients. These patients usually lack certain fatty acids in the skin that augment hydration and barrier function, leading to the development of dry, itchy skin. This may then generate the “itch-scratch” cycle that, in some patients, develops into chronic dermatitis. The generalized pruritus that results can also lead to emotional conditions such as anxiety or depression and, subsequently, progression to self-inflicted skin conditions.

Unlike xerosis in older patients, atopic dermatitis predominantly affects infants, children and young adults. Approximately 60 percent of the cases of atopic dermatitis are diagnosed within the first year of life, and 90 percent of all cases are diagnosed by the age of 5. Only 10 percent of atopic dermatitis cases are diagnosed over the age of 5, and it is rare for the condition not to be identified before a patient reaches his or her teens. The condition follows a relapsing course, and most adults who suffer from atopic dermatitis have had it nearly all of their lives.
In both of these conditions, xerosis and atopic dermatitis, simply rehydrating the skin, applying moisturizing creams or applying products like MimyX™ cream (Stiefel Laboratories) will replace the fatty acids and repair the skin barrier function, and thereby decrease most of the patient’s symptoms.

A Guide To Prevention And Treatment

Prevention of factitial dermatitis includes getting patients to understand that their actions are making the condition worse and if they stop rubbing, scratching and excoriating the skin, the problem will quickly resolve. This is not as easy as it might seem. Most authorities agree that when one suspects dermatitis artefacta, one should avoid direct confrontation. One should evaluate the patient’s emotional situation or stresses, and refer the patient for psychiatric counseling. In some cases, referral to a university-based dermatologist with experience treating psychocutaneous disorders is the best approach. Treatment options for self-inflicted lower extremity dermatitis are relatively limited. As stated earlier, the initial approach when one suspects this diagnosis is to cover the affected area with a medicated paste Unna’s boot dressing. Other dermatologic approaches include the use of antibiotics, topical steroids and lubricants, as well as adjunctive therapy such as MimyX cream. If there is significant crusting and/or secondary bacterial infection of the erosions and excoriations, antibiotic therapy (topical mupirocin 2% ointment) is indicated. Applying steroid topicals twice a day can be very effective in reducing the erythema and inflammation of the area. Try low-potency (group IV–V) topical steroids first and gradually progress to high-potency steroids (group I–II) if there is slow response. Long-term use of topical steroids is not recommended due to the increasing side effects with chronic usage.
I have found that one can reduce much of the compulsive scratching and rubbing by having the patient apply a corticosteroid-impregnated occlusive tape cover (Cordran Tape) to problem areas. This provides both a physical barrier to skin trauma as well as an effective form of short-term relief. As with other dermatology conditions, it is best to recommend that the patient learn to use only mild soaps and decrease the frequency of bathing. Reducing the temperature of the bath water or showers also helps to reduce drying of the skin. They should try to increase the moisture in their home environment by adding humidifiers whenever possible. Additionally, patients can also try substituting regular application of skin lubricants and lotions that are without fragrance or alcohol in place of rubbing and scratching. The most difficult time for many patients is at night and, in these cases, the patient may sleep with a pair of thin cotton gloves in an attempt to reduce the amount of scratching that occurs subconsciously. Counseling should be supportive and empathic but should also be open to other approaches as new issues emerge. Cognitive-behavioral approaches may focus on helping the patient understand his or her illness through education and finding alternative responses to the pruritic sensations. The podiatric physician should maintain a close working relationship with the patient’s family physician and therapist, and offer education and explanations to the patient’s family. Treatment aimed at a primary psychiatric diagnosis is usually fundamental for effective results in these patients. Factitial dermatitis, also known as dermatitis artefacta, is a psychocutaneous disorder in which patients damage their skin but usually deny their self-involvement. This disorder encompasses a wide range of potential lesions including blisters, cuts, excoriations, ulcers and burns. Patients often are unable to describe how the lesions evolved. 
Upon examining the lesions, practitioners may see bizarre patterns that are not characteristic of any known skin disease. Factitial dermatitis more commonly affects young adults and adolescents, and it is four times more common among women than men. Psychological disorders involved with factitial dermatitis include personality disorders, anxiety, depression and posttraumatic stress disorder. Dr. Dockery is a Fellow of the American College of Foot and Ankle Surgeons and the American Society of Podiatric Dermatology. He is board certified in foot and ankle surgery. Dr. Dockery is the Chairman of the Board and Director of Scientific Affairs for the Northwest Podiatric Foundation for Education & Research, USA. Dr. Dockery is the author of Cutaneous Disorders of the Lower Extremity (Saunders, 1997) and Lower Extremity Soft Tissue & Cutaneous Plastic Surgery (Elsevier Sciences, 2006).

Suggested Reading
1. Antony SJ, Mannion SM: Dermatitis artefacta revisited. Cutis. 1995;55(6):362-364.
2. Cyr PR, Dreher GK: Neurotic excoriations. Am Fam Physician. 2001;64:1981-1984.
3. Dockery GL: Psychocutaneous disorders: some lower extremity presentations. J Am Podiatr Assoc. 1982;72:388-395.
4. Dockery GL: Psychocutaneous disorders. In: Cutaneous Disorders of the Lower Extremity, ch. 17. WB Saunders, Philadelphia, pp. 288-298, 1997.
5. Dockery GL: How to detect and treat pruritus. Podiatry Today. 2006;19(8):52-64.
6. Joe EK, Li VW, Magro CM, et al: Diagnostic clues to dermatitis artefacta. Cutis. 1999;63(4):209-214.
7. Koblenzer CS: Neurotic excoriations and dermatitis artefacta. Dermatol Clin. 1996;14(3):447-455.
8. Koo J, Lebwohl A: Psychodermatology: the mind and skin connection. Am Fam Physician. 2001;64:1873-1878.
9. Nielsen K, Jeppesen M, Simmelsgaard L, et al: Self-inflicted skin diseases. A retrospective analysis of 57 patients with dermatitis artefacta seen in a dermatology department. Acta Derm Venereol. 2005;85(6):512-515.
Mention the word trauma to Americans in the 21st century, and their thoughts are likely to turn to images of terrorism, war, natural disasters and a seemingly continual stream of school shootings. The horrific scenes at Newtown and Columbine still dominate public consciousness, particularly when our society discusses child trauma. While those events make headlines, however, counseling professionals say the most pervasive traumatic threat to children is found not in big events or stranger danger, but in chronic and systemic violence that happens in or close to the home. This kind of ongoing trauma, much of which takes place out of public view, leaves deep scars that can cause a lifetime of emotional, mental, physical and social dysfunction if left untreated. Research shows that chronic, complex trauma can even rewire a child’s brain, leading to cognitive and developmental issues. The good news is that counselors in all areas of practice — in schools, agencies, shelters, clinics, private practice and elsewhere — can and are working with children and, when possible, their parents to stop the cycle of violence, or at least to mitigate its effects.

Behind closed doors

The number of children exposed to violence in the United States is staggering. According to the National Survey of Children’s Exposure to Violence (NatSCEV), funded by the U.S. Department of Justice and the Centers for Disease Control and Prevention (CDC) and carried out by the University of New Hampshire’s Crimes against Children Research Center, more than 60 percent of children surveyed had been exposed to direct or indirect violence during the 12 months prior to the survey.
Nearly half — 46.3 percent — had been assaulted at least once in the past year, meaning they had experienced one or more of the following: any physical assault, assault with a weapon, assault with injury, attempted assault, attempted or completed kidnapping, assault by a brother or sister, assault by another child or adolescent, nonsexual genital assault, dating violence, bias attacks or threats. One in 10 had experienced some form of maltreatment, which includes nonsexual physical abuse, psychological or emotional abuse, child neglect and custodial interference. Other CDC research indicates that 1 in 4 girls and 1 in 6 boys are victims of sexual abuse. However, many experts emphasize that due to the stigma involved, sexual abuse is underreported. Significant exposure to violence and trauma can also lead to illness later in life. From 1995-1997, the CDC, in collaboration with Kaiser Permanente, collected detailed medical information from 17,000 patients at Kaiser’s Health Appraisal Clinic in San Diego. These patients also answered detailed questions about childhood experiences of abuse, neglect and family dysfunction. The initial study, Adverse Childhood Experiences, as well as more than 50 studies since using the same population, found that adult survivors of childhood abuse are more likely to develop chronic conditions and diseases such as heart disease, obesity, cancer, chronic obstructive pulmonary disease and liver disease. They are also more likely to engage in risky health behaviors such as smoking and drug and alcohol abuse. In addition, adult survivors of child abuse may have autobiographical memory problems; exhibit increased problems with depression, anxiety and other mental illnesses; and struggle with suicidal tendencies. NatSCEV data, collected between January and May 2008, indicate that one in 10 children surveyed experienced five or more incidents of direct violence. 
It is this kind of ongoing abuse that can cause polyvictimization, or what many researchers call complex trauma — repeated exposure to traumatic events over time and often at the hands of caregivers or other loved ones. “This cumulative trauma has much more serious effects than a single event,” says David Lawson, a licensed professional counselor (LPC) and licensed marriage and family therapist in Nacogdoches, Texas, who has worked with victims and perpetrators of sexual and domestic abuse since the 1980s. Because the abuse is ongoing, it disrupts a child’s sense of security, safety and self and alters the way he or she sees others, explains Lawson, an American Counseling Association member who is also a researcher and professor in the school psychology and counseling program at Stephen F. Austin State University in Nacogdoches. “In childhood, attachments are still forming, and abuse can shatter this developing ability,” says Jennifer Baggerly, an ACA member, LPC and play therapist who studies child trauma intervention. “It can also distort their forming personality and the way they interact with people as a whole.” This distortion can cause the child to believe that the world is an unsafe place and that people aren’t trustworthy, adds Baggerly, an associate professor and chair of the Department of Counseling and Human Services at the University of North Texas at Dallas. That pattern of uncertainty and instability can cause cognitive distortion, dissociation and problems with emotional self-regulation and relationship formation, and even alter a child’s brain structure, notes Lawson, the author of Family Violence: Explanations and Evidence-Based Clinical Practice, published by ACA in 2013. “Children get stuck in flight or fight,” adds Baggerly. 
“Everything is a threat, so instead of strengthening the prefrontal cortex, the brain operates more from the limbic system, which causes them to be more hypervigilant.” Because they are almost constantly on alert, these children and adolescents most of the time use what Lawson calls their “survival brain” instead of their “learning brain.” Childhood and adolescence are periods in which the brain is developing rapidly and crucial cognitive skills are being learned. If children and adolescents spend too much time in survival mode, they are not accessing areas in the brain that are responsible for learning developmentally appropriate cognitive skills and laying down the neural pathways that are critical to future learning. “As the child gets older, this chronic hypervigilance — and the overload of cortisol that comes with it — completely remaps the brain and just stifles development,” says Gail Roaten, president-elect of the Association for Child and Adolescent Counseling, a division of ACA. “You see them lose ground cognitively, especially in their ability to learn.”

Support and stability

Traumatized children’s problems with cognition, learning, self-regulation and development can last a lifetime, making it more likely that they will continue the cycle of abuse in their relationships, abuse drugs and alcohol, have trouble finding and keeping jobs or end up in the criminal justice system. Adults who were traumatized as children also are much more likely to face a host of physical and mental health problems. The situation is far from hopeless, however. Counseling interventions for trauma can make a dramatic difference, and the earlier a child starts receiving therapy, the better. A variety of techniques have proved to be effective, but interventions are most successful when a supportive environment is created, Lawson emphasizes.
Whenever possible, a parent or parents should be participants in a child’s therapy (as long as they are not the perpetrators of the abuse), and if not the biological parents, then foster parents or grandparents. “I try to bring in whoever can help build a support system for the child,” Lawson says, “because an hour a week [of counseling] is woefully inadequate, and I need to have them able to take what they learn in therapy into the home.” In many cases, parents or caregivers need help learning how to support the abused child emotionally, he says. When parents come to sessions with their children, the counselor can help the parents learn not just the best way to support the child in therapy, but also how to strengthen their parenting skills. “We really emphasize connection,” Lawson says. “Once they [abused children] have attachment, they may be ready to tell parents about their abuse and may just blurt it out at home. I try to prepare parents to listen to the child. If the parents are not comfortable addressing this [topic], I have them at least write down what the child says and then use that as a therapeutic prompt.” In sessions, Lawson guides parents, teaching them how to interact and better bond with children who have been traumatized. Some parents and caregivers have never really learned how to play with their children, he says. At the same time, he notes that learning positive interaction skills is not just about the fun stuff. Parents and caregivers also need to know how to effectively discipline the child. “Many times when parents find out that their child has been abused, they are hesitant to discipline or correct behavior because they feel sorry for them,” he says. “Or they come down too hard.” Lawson encourages parents to use time-outs, to not respond when a child is acting out with attention-getting behavior and to not use corporal punishment. 
In the absence of parents or other supportive adults, the counselor may become the stabilizing adult in a traumatized child’s life. Although the counselor is not with the child as often as a parent or caregiver would be, just having someone who is concerned and will listen to whatever the child wants to say can be enough for an abused child to start to heal, Lawson says, even if he or she never chooses to talk about the abuse. He notes that even in the absence of other supportive figures, the therapeutic bond between counselor and child can help in decreasing hyperarousal. Counselors need to know that although it may seem best to address the child’s trauma right away, establishing and cementing the therapeutic relationship must come first, Lawson says. The child needs to feel safe and supported — even if it is only in the counselor’s office — before he or she can begin to process the trauma. “You’re trying to get them in a safe place if possible, or at least a predictable place,” Lawson says. “Then we can start teaching them how to cope [with the trauma] without lashing out.” Abused children do not know how to cope with what they are experiencing, Lawson says. It is common for children who are traumatized to lash out in anger when stressed and to feel that the best way to establish some sort of stability in their lives is to try to control everything. They may be moody, irritable or withdrawn. Abused children may also bully and hit other children or turn their anger on themselves and engage in self-abusive behaviors such as cutting. Once a child feels supported, the counselor can also begin to teach the child how to self-soothe. Lawson guides traumatized children in using calming techniques such as diaphragmatic breathing or grounding themselves by focusing on something external such as the ticking of the clock or the texture of their clothes. “The point is to experience emotions in a safe place and cut out bad coping behaviors,” he says.
Jennifer Foster, an assistant professor in the Department of Counselor Education and Counseling Psychology at Western Michigan University, studies child sexual abuse. Much of her research has involved listening to the narratives of abuse victims and how they perceive what has happened to them. Although these children display myriad reactions and emotions, Foster says two themes are always prominent: fear and safety. “Child victims of sexual abuse often view the world as unsafe and are likely to enter counseling with unresolved fears,” Foster says. “They need help from their counselor to learn how to cope with their fears.” “Although adults often see disclosure as a positive thing that will put an end to the abuse, for many children it is embarrassing and frightening, especially for those who feel at fault for their abuse and believe they will be blamed or, worse, not believed,” says Foster, who studied the experiences of sexually abused children for her dissertation. Several counseling interventions are designed to help sexually abused children regain a sense of safety. One is called the “safe place technique,” in which a counselor guides the child in visualizing and vividly describing an imaginary safe place. “The counselor may say, ‘Close your eyes and picture a special place where you feel completely safe,’” Foster explains. “This can be followed by specific questions to capture additional details such as: What do you see? What do you hear? What do you feel? What are you doing in your safe place? The details are recorded by the counselor and used to create a script.” Once the safe place has been established, the child can return to it mentally anytime he or she feels stressed or scared, Foster says. Another intervention called the “comfort kit,” developed by Liana Lowenstein, helps children who engage in nonsuicidal self-injury to learn self-soothing strategies, says Foster. 
“Counselors help children brainstorm and create a list of items that bring them comfort and make them feel better,” she explains. “Although the process is guided by the counselor, children are the ones who choose what will go inside their box or bag.” Foster says children commonly include items such as a blanket, music, a favorite stuffed animal, written or recorded guided imagery, a stress ball, a list of relaxation activities, bubbles (for deep breathing exercises), a favorite book, a picture of a caring person or special place, a journal and pen, art supplies and a list of self-affirmations. Foster is also a proponent of bibliotherapy. “Children’s books about sexual abuse can introduce child victims to others who have had similar experiences, which may lead to decreased feelings of isolation and normalize their trauma-related symptoms,” she says. Books can also provide comfort, offer coping suggestions and teach kids important lessons such as that the abuse is not their fault, Foster adds. Because fear is a predominant issue for child victims of sexual abuse, Foster also recommends stories that specifically address feeling afraid. Her suggestions include Once Upon a Time: Therapeutic Stories That Teach and Heal by Nancy Davis and A Terrible Thing Happened: A Story for Children Who Have Witnessed Violence or Trauma by Margaret Holmes. To help older adolescents explore their memories and feelings connected to sexual abuse, Foster recommends The Secret: Art & Healing from Sexual Abuse by Francie Lyshak-Stelzer. Foster notes that the author’s artwork is particularly effective at capturing fear and the myriad other feelings generated by abuse. 
Finding relief through play

Play therapy is one of the most commonly used interventions with children, particularly those who have suffered complex trauma, meaning they have experienced long-term (and often multiple types of) abuse, says Roaten, an LPC who works with traumatized children in clinics and schools, and an associate professor at Hardin-Simmons University in Abilene, Texas. Most therapeutic playrooms feature a fairly specific set of toys that might include an art center, play dough, a Bobo doll (an inflatable plastic doll modeled after the inflatable clown used in Albert Bandura’s seminal study on children and aggression), a dollhouse with miniature people, animal figures, toy weapons, costumes and a sandbox. These toys and activities help children to act out their experiences in a safe and less negative manner, Roaten says. For instance, she recounts treating one child who “would just attack and slash the doll where the penis was. She was a victim of sexual abuse.” In some cases, Roaten says, children just “play through,” processing their trauma entirely through play without needing to talk to the play therapist. In many instances, Baggerly says, traumatized children act out things they aren’t able to verbalize. She once treated a 6-year-old who didn’t speak for about 10 sessions because the girl had a severe case of internalized anxiety and depression. But as the girl played, she would express her rage by taking a gun and shooting the Bobo doll in the head, stomach and groin area. Baggerly took this cue as a chance to ask the child about the anger and hurt she was feeling. Catherine Tucker, a licensed mental health counselor who works with traumatized children in her role as a counselor supervisor and consultant, uses a child and family therapy called Theraplay, which was developed by the Theraplay Institute in the 1960s.
“Theraplay works on a four-dimensional model: structure, nurture, engagement and challenge,” says Tucker, an associate professor in the college of education at Indiana State University. Theraplay builds and enhances attachment, self-esteem, trust in others and engagement through participation in simple games. The idea is that the four dimensions — structure, nurture, engagement and challenge — are needed by children for healthy emotional and psychological development. The “play” in Theraplay is built around activities that teach participants what the elements of those dimensions are. Ideally, children engage in Theraplay with their parents or caregivers. Participating together teaches skills to parents or caregivers who don’t know how to provide the four dimensions, while enhancing the bond with the child. In the absence of parents or caregivers — whether because they are abusive or because they cannot or do not want to participate — the counselor plays directly with the child so the child can still learn how to interact in an emotionally healthy way. The games and activities are simple — suitable for children as young as 1, yet still engaging for older children — and include things such as blowing bubbles, playing with stuffed animals, cotton ball hockey, cotton ball wars and newspaper basketball. The activities teach parenting skills and also help traumatized children with affect regulation, impulse control, feeling safe and not feeling like they have to be in control of the world, Tucker says. She notes that, oftentimes, kids who have suffered trauma feel like they have to be in charge either because a parent is abusive or simply doesn’t know how to provide a sense of security or stability, or because the child’s sense of control is being undermined by the abuse he or she experienced at the hands of another adult or peer. 
Finding help at school

Counselors who are treating traumatized children should tap all available resources to help these clients, Lawson says, working not only with caregivers or other relatives but also with the child’s school. School counselors may be a source of additional one-on-one counseling for the child, or they could get the child involved in group activities with other children who are trauma victims or with children who share common interests such as music, sports or art, Lawson says. These peer networks provide abused children additional sources of support and can also teach them how to interact with people — something that many abused and isolated children have never learned to do. Perpetrators of abuse seek to control and isolate their victims. An abusive parent has the power to cut off or severely limit a child’s healthy interactions with people outside of the circle of abuse. “[These] kids often didn’t learn social skills because they are kept away from other people,” Lawson says. Abuse is often part of a viciously long-lived cycle, handed down from generation to generation, Lawson adds. Parents who were abused as children often grow up to abuse their own children. Even if parents with an abusive background are not abusive themselves, they may still carry on other dysfunctional behaviors, he says. “You may have three or four generations of people [who] have a very skewed view of how to interact with people,” he says. “So they never learn how to interact with others. You have to help [these children] connect with other sources.” School counselors also can play important roles as advocates and educators. Many people — including teachers and administrators — do not understand that many children who act out are doing so because they have been or are being abused, Tucker asserts. “School counselors can really make a difference by making sure that kids get evaluated instead of just automatically disciplined,” Tucker says.
“So many boys end up in the criminal justice system because they were physically acting out in response to trauma,” she adds. School counselors can also help abused and traumatized children learn how to help themselves, says Elsa Leggett, an ACA member, associate professor of counseling at the University of Houston-Victoria and president of the Association for Child and Adolescent Counseling. “Talk to kids about safety plans,” Leggett urges. “Ask them, ‘When abusive things are going on at home, where do you go? How do you know when things are getting dangerous?’” The most important thing that all practicing counselors can do to address childhood trauma is to ask questions, Lawson says. Children — and sometimes adults who were traumatized as children — don’t always recognize what they’ve experienced as abuse, so rather than asking “have you been abused?” Lawson instructs his students to pose questions such as “has anyone ever hit you?” and “has anyone ever touched you in a way that made you feel uncomfortable?” ACA member Cynthia Miller is an assistant professor of counseling at South University in Richmond, Virginia, and an LPC who has worked with incarcerated women. She has seen the kind of positive change that can occur when people get the help they need, but she has also witnessed the pattern of incarceration, addiction and institutionalization that can become entrenched in generation after generation. “If you want to decrease the amount of money we spend on treating people with substance abuse or incarceration,” Miller says, “address child abuse.”

Caring for children during a disaster

Although ongoing trauma causes the biggest and longest-lasting kind of damage, one-time events can also create problems that linger. It is particularly important for children to receive timely counseling intervention, experts say. “Typically, most children will have short-term responses to a disaster that include five basic realms,” Baggerly says.
These realms are:
- Physical: Symptoms include headache or stomachache
- Thought process: Children exhibit confusion and inattention
- Emotional: Children are scared and sad
- Behavioral: Children might become very withdrawn or clingy, or may start sucking their thumb or wetting the bed again
- Spiritual/worldview: Children may question their beliefs about God and the world

(For more information about typical trauma responses and recommended interventions, see “Children’s trauma responses and intervention guidelines” below.) “Typically these [responses] don’t last long,” Baggerly says, “but that depends on the kind of support kids get in the immediate aftermath.” Ultimately, the purpose of any counseling intervention after a traumatic event is to reduce or eliminate a child’s anxiety and stress, Baggerly asserts. She attempts to do that by “resetting” the child and connecting him or her to coping strategies. “They need caring family and community support,” Baggerly says, “but if it is a huge disaster, then parents and teachers are equally traumatized, so they are not able to give support to kids. That’s when you need to bring people from outside.” Some children are at greater risk than others, Baggerly says. “Kids who don’t have supportive family [and] who already have anxiety or have some type of developmental disability often will have ongoing symptoms that go longer than 30 days,” she explains. “Counselors need to triage to find out who is at most risk.” During her roughly dozen years of experience working with chronic trauma and disasters, Baggerly has developed an integrated approach that she calls disaster response play therapy. The approach uses a trauma-informed philosophy in which counselors train parents and teachers in typical and atypical reactions to disasters so they can screen children and determine which ones need more help, she explains.
“We also normalize typical symptoms, provide psychoeducation that informs kids about the impact of disasters, teach them coping strategies and provide them with child-centered play therapy.” Baggerly usually begins by gathering a group of children and talking with them about rebuilding the community. She also encourages children to use expressive arts or drama to communicate their feelings. “The other part of what we do is facilitate connection and conversation between kids and parents,” Baggerly says. “We may start out with Theraplay and do structured activities, such as holding hands or singing ‘Row, Row, Row Your Boat.’ The point is to have them [parents and children] looking at each other so that the mirror neurons can be engaged.” Baggerly also educates parents on activities they can do at home with their children. She refers them to an online workbook, “After the Storm,” which has scales of 1 to 10 or a thermometer that kids can fill in to indicate how much stress they are feeling. Roaten often does volunteer trauma work and provided on-site support in the wake of the April 2013 fertilizer plant explosion in West, Texas, that killed 15 people, injured more than 150 and caused extensive damage to buildings and property. “One girl, a seventh-grader, had been standing outside in a neighborhood with a view of the plant and observed the explosion itself,” Roaten says. “So she had that image in her head and it would not go away. I taught her some deep breathing and progressive relaxation and did some guided imagery about her favorite place to be. “When that picture came up in her mind, she could breathe, relax and go to her good place. By the fourth day I was there, she was no longer seeing the image.” Roaten uses expressive therapy for children who aren’t very verbal or who don’t have the vocabulary to talk about their feelings. She brings a sand tray with miniatures of fences, people and buildings. 
She then allows children (and even adults) to set up scenarios or vignettes that help them express and act out what they are feeling. “I might say something like, ‘Create your world before [Hurricane] Katrina; then create your world after Katrina,’” Roaten explains.

Roaten also uses trauma-focused cognitive behavior therapy to help children and adolescents learn coping skills. “You teach them about trauma and its impact on them,” she explains. “Then you teach them relaxation and breathing skills. Once you get them to be able to self-soothe, relax and be calm, you can help them deal with pictures or scenarios that come up. You help them change the story — what they are telling themselves and what that means — which helps them work through the trauma a little bit at a time.”

Children’s trauma responses and intervention guidelines

Preschool through 2nd grade

Typical trauma responses:
- Believes death is reversible
- Magical thinking
- Intense but brief grief responses
- Worries others will die
- Separation anxiety
- Regressive symptoms
- Fear of the dark
- Reenactment through traumatic play

Interventions:
- Give simple, concrete explanations as needed
- Provide physical closeness
- Allow expression through play
- Read storybooks such as A Terrible Thing Happened, Brave Bart, Don’t Pop Your Cork on Monday

3rd through 6th grade

Typical trauma responses:
- Asks a lot of questions
- Begins to understand that death is permanent
- Worries about own death
- Increased fighting and aggression
- Hyperactivity and inattentiveness
- Withdrawal from friends
- Reenactment through traumatic play

Interventions:
- Give clear, accurate explanations
- Allow expression through art, play or journaling
- Read storybooks

Typical trauma responses:
- Physical symptoms such as headaches and stomachaches
- Wide range of emotions
- More verbal but still needs physical outlet
- Arguments and fighting

Interventions:
- Be accepting of moodiness
- Be supportive and discuss when they are ready
- Groups with structured activities or games

Typical trauma responses:
- Understands death is irreversible but believes it won’t happen to them
- Risk-taking behaviors
- Lack of concentration
- Decline in responsible behavior
- Rebellion at home or school

Interventions:
- Encourage expression of feelings
- Groups with guiding questions and projects

Source: “Systematic Trauma Interventions for Children: A 10-Step Protocol,” by Jennifer Baggerly in Terrorism, Trauma and Tragedies: A Counselor’s Guide to Preparing and Responding, third edition, American Counseling Association Foundation, 201

ACA Traumatology Interest Network

Counselors and counselors-in-training who have an interest in providing counseling services to trauma- or disaster-affected individuals and communities should consider joining the ACA Traumatology Interest Network. Network participants share insights, experiences, new plans and advances in trauma counseling services. For more information on joining the interest network, go to counseling.org/aca-community/aca-groups/interest-networks.

To contact individuals interviewed for this article, email:
- David Lawson at [email protected]
- Jennifer Baggerly at [email protected]
- Catherine Tucker at [email protected]
- Jennifer Foster at [email protected]
- Gail Roaten at [email protected]
- Elsa Leggett at [email protected]
- Cynthia Miller at [email protected]

Laurie Meyers is the senior writer for Counseling Today. Contact her at [email protected].

Letters to the editor: [email protected]
When the silent film got a voice

The invention of the sound-on-film process

An interview with Prof. em. Dr.-Ing. Uwe E. Kraus, long-time head of the Department of Communications Engineering at the University of Wuppertal.

The German engineer Hans Vogt was one of the inventors of the new sound-on-film process in the early 1920s. Together with Joseph Massolle and Joseph Benedict Engl, he founded the company "Tri-Ergon", which means "the work of three". What was revolutionary about this invention?

Kraus: After decades of silent film, it was now possible for the first time to make the actors' voices, ambient sounds and accompanying music audible. In this process, the sound of a motion picture is stored photographically on a strip at most 2.54 mm wide, the sound track, which runs between the film frames and the perforation holes. To record it, the electrical sound signal controls the brightness of a lamp whose light falls on this sound track through a narrow transverse slit and blackens the film along its running direction in step with the sound signal. For mass distribution, the essential advantage of the optical sound process is that the film images and the sound track are copied together when release prints are made. In addition, the optical sound track is fixed in place and time relative to the film images and is therefore always lip-synchronous. It also cannot be accidentally erased, as is the case with magnetic sound. The disadvantage (as with the film image itself) is the susceptibility to scratches, which can cause audible disturbances. Even with an undamaged film the sound frequency range is limited, but this was not felt to be a problem at the time.

What happens technically in the projector during this process?

Kraus: A small lamp of constant brightness in the optical sound unit illuminates the sound track through a narrow slit oriented transversely to it.
The brightness of the light emerging on the opposite side of the film fluctuates according to the variable blackening of the sound track. A photocell converts this into an electrical signal, the reproduced sound signal, which is fed via amplifiers to the loudspeakers in the cinema auditorium. For projection, the film is advanced frame by frame between two freely hanging loops, one above and one below the picture gate. While a frame is shown it stands still in the gate; during the pull-down to the next frame, a rotating shutter interrupts the light toward the screen. The sound track, on the other hand, must be read from a continuously and evenly running piece of film. In the sound unit, the so-called sound drum, which is coupled to a flywheel, ensures this smooth and even film motion; seen in the film's running direction, this happens behind the projection lens of the projector. Because of these different reproduction conditions, the image and sound track are offset from each other on the film: a sound event is therefore not located directly next to the corresponding image but 20 frames ahead of it in the running direction. For their system, the three German engineers developed new photocells based on electron tubes and applied for about 150 patents. Nevertheless, the invention was not successful in Germany, even after the first sound-on-film screening on September 17, 1922, at the Alhambra movie theater in Berlin. Why not?

Kraus: It was essentially the technical deficiencies. In 1925, one of the three, Joseph Benedict Engl, was responsible for the sound of the short film "Das Mädchen mit den Schwefelhölzern" (The Little Match Girl), based on the fairy tale by Hans Christian Andersen and produced by "Universum-Film AG" (UFA) in Berlin. However, the premiere had to be canceled due to considerable technical deficiencies; the film became a commercial failure.
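The optical chain described above (the sound signal modulates a lamp, the developed film stores the blackening, a photocell reads the transmitted light) and the 20-frame picture/sound offset can be sketched numerically. This is an idealized illustration rather than Tri-Ergon's actual photometry: the linear signal-to-transmittance mapping and the standard 35 mm figures of 24 frames per second and 19 mm of film per frame are assumptions, not values stated in the interview.

```python
import math

def record_variable_density(samples):
    """Idealized variable-density recording: map a bipolar audio
    signal (-1..1) to film transmittance (0..1) around a 50% bias,
    as the lamp brightness follows the sound signal."""
    return [0.5 + 0.5 * s for s in samples]

def play_back(transmittances):
    """Photocell playback: a constant lamp shines through the track,
    the photocell current tracks the transmitted light, and removing
    the bias recovers the bipolar sound signal."""
    return [2.0 * (t - 0.5) for t in transmittances]

def sound_advance(frames=20, fps=24, frame_pitch_mm=19.0):
    """Distance and time by which the sound track leads its picture."""
    return frames * frame_pitch_mm, frames / fps

# A 1 kHz test tone "exposed" onto the track and read back
tone = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48)]
track = record_variable_density(tone)
recovered = play_back(track)
assert all(abs(a - b) < 1e-9 for a, b in zip(tone, recovered))

dist_mm, secs = sound_advance()
print(f"sound leads picture by {dist_mm:.0f} mm of film ({secs:.3f} s)")
# prints: sound leads picture by 380 mm of film (0.833 s)
```

A real chain is of course not lossless: film grain, scratches and the nonlinear density curve of the stock all distort the recovered signal, which is one reason the usable frequency range was limited.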
The sound film patents were sold to the American William Fox, who made the sound film world famous from 1928 on and was able to live well off the Tri-Ergon patents until the end of his life, even after the stock market crash.

Over the years, other processes were also used, such as the magnetic sound process. What advantage did this process offer the film industry?

Kraus: Magnetic sound processes allow better sound quality than sound-on-film processes. Professional SEPMAG (separated magnetic) processes worked with magnetic film, i.e. a perforated sound tape of 16 mm, 17.5 mm (two sound tracks each) or 35 mm width (up to six sound tracks), which was played in parallel and in sync with the picture film on a mechanically or electrically coupled cord machine with multi-track sound heads. When the film was shot, the original sound was first recorded on portable tape recorders (Nagra) with an additional 50 Hz synchronization track and later transferred to magnetic film on an electrically synchronized cord machine. The classic elements of dialogue, sound effects and music could then be cut on magnetic film at editing tables with three sound tracks, ready for mixing in the dubbing studio with a film projector and several cord machines coupled to it.

The COMMAG (combined magnetic) process was widespread from the 1950s to the 1980s; here, a narrow magnetic stripe was applied directly to the film and played back in the projector with a pickup head. The advantage was that only one device was needed instead of two, and synchronizing picture and sound was no longer a problem.

Dolby Stereo, a groundbreaking new sound system, hit theaters in 1976. Cinemagoers were able to experience this process for the first time in the film "Tommy" by the rock group The Who. What was new about it?

Kraus: The Dolby A noise reduction system improved the sound quality considerably.
Quiet sounds are raised in volume before the sound is recorded on the film and lowered again by the same amount during playback. This lowers the annoying film noise and significantly improves the signal-to-noise ratio. It also became possible to fit two optical sound tracks into the space that one track had previously required, and still to accommodate in these two tracks the information for a surround channel (rear effects and reverberation) and a center channel (dialogue track). That was the beginning of Dolby Stereo.

What sound systems do professionals work with today?

Kraus: Dolby SR is a noise reduction process in use since 1987 for analog optical sound recording on 35 mm film. SR stands for spectral recording, so named because the process has a spectral compressor function adapted to the ear. It is the most highly developed analog audio noise reduction process and marks the end of this line of development, since digital systems no longer need noise reduction. Dolby Digital was first used as a multi-channel sound format for motion pictures (e.g. Apocalypse Now) and in 1995 was chosen as the standard multi-channel sound for the then newly developed DVD (Digital Versatile Disc) as well as for television broadcasting. It is a 6-channel (5.1) digital system and is also known as AC-3, after the audio coding process used. Dolby Atmos is an elaborate surround format for home and cinema use that was introduced in 2012. It expands purely channel-based surround sound systems with multiple ceiling speakers and theoretically allows an unlimited number of audio tracks; it is backward compatible with older multichannel sound systems such as 5.1 or 7.1.

Uwe Blass (Interview on January 7, 2021)

Uwe E. Kraus held the Chair of Communications Engineering in the Faculty of Electrical Engineering, Information Technology and Media Technology at the University of Wuppertal until his retirement in 2010. His predecessor in office, Prof.
Dr. rer. nat. Dr. h.c. F.J. In der Smitten, had set up a color television test laboratory at Westdeutscher Rundfunk Cologne in the 1960s and, after his appointment at the University of Wuppertal, brought essential apparatus from this laboratory with him to Wuppertal, where it was used for research and teaching. Prof. Kraus and Prof. In der Smitten arranged and operated these devices from the pioneering days of color television in Germany in roughly the same way as in the Cologne laboratory at the time. Now called the "Historical Color Television Laboratory at the University of Wuppertal," it offers insights into the transformation of television studio technology over a period that spans from the post-war relaunch of black-and-white television in 1952, through the launch of color television in 1967, to the beginnings of digital television in the mid-1990s. The Historical Color Television Laboratory can also be visited; registration for guided tours is possible by e-mail: [email protected].
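The Dolby A companding principle Kraus outlined earlier (quiet sounds boosted before recording and cut by the same amount on playback, so that the medium's noise is cut with them) can be shown in miniature. A single fixed gain here stands in for Dolby's four-band, level-dependent processing; the gain and noise figures are arbitrary illustration values, not Dolby specifications.

```python
import random

GAIN = 10.0    # fixed low-level boost (stand-in for Dolby's band-split compressor)
NOISE = 0.01   # constant additive noise of the medium ("film hiss")

def through_medium(samples, rng):
    """The recording medium adds its own noise floor."""
    return [s + rng.uniform(-NOISE, NOISE) for s in samples]

def rms_error(a, b):
    """Root-mean-square difference between two signals."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

rng = random.Random(0)
quiet = [0.001 * (i % 7 - 3) for i in range(1000)]   # a very quiet passage

# Without noise reduction, the medium's hiss swamps the quiet signal
plain = through_medium(quiet, rng)

# Companding: encode (boost), pass through the medium, decode (cut)
encoded = [s * GAIN for s in quiet]
decoded = [s / GAIN for s in through_medium(encoded, rng)]

print(f"residual noise, plain:     {rms_error(quiet, plain):.6f}")
print(f"residual noise, companded: {rms_error(quiet, decoded):.6f}")
```

Decoding divides both the boosted signal and the medium's added noise by the same gain, so the residual noise drops by that factor; the real system applies this only to low-level content and per frequency band, which is why SR's "spectral" compressor is described as adapted to the ear.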
Established grape vines usually consist of a major trunk with several older, dark colored vines extending outward on a trellis or arbor. After several years, the older vines may become too numerous, and some of them may need to be removed clear back to where they are connected to the main trunk.

Lighter colored vines extend from the main vines (and sometimes directly from the trunk). These lighter colored vines were produced last year. If most of them are allowed to remain, growth will become too thick and fruit will be limited by lack of light. Some may need to be removed completely, back to where they connect to the older, dark colored vines. Most of the lighter colored vines should be shortened to a few inches, leaving only 2 to 4 nodes where new shoots will grow. These nodes will produce flowers and fruit. By limiting the number of fruiting buds, vines produce larger clusters of grapes with larger fruits in each cluster. Some of these short vines may need to be tied to a wire or trellis to give them support.

Shortly before harvest, the lower leaves surrounding the grape bunches can be removed to provide better sun exposure. This helps to ripen the grapes and also improves air circulation, which helps to prevent disease infection.
Class: Cleveland Class Cruiser, later converted to Little Rock Class Guided Missile Cruiser
Launched: August 27, 1944
At: Cramp Shipbuilding Company, Philadelphia, Pennsylvania
Commissioned: June 17, 1945
Converted: 1960 at New York Shipbuilding Corporation, Camden, New Jersey
Length: 610 feet
Beam: 66 feet
Draft: 25 feet
Displacement: 10,670 tons
Armament: Two Mk II Talos Missile Launchers; three 6-inch guns; two 5-inch/38 caliber guns

Buffalo & Erie County Naval & Military Park
One Naval Park Cove
Buffalo, New York 14202
Fax: (716) 847-6405
Latitude: 42.8775537253, Longitude: -78.8807763366

The only World War II cruiser on display in the U.S., USS Little Rock is the sole survivor of the Cleveland class, the most numerous class of U.S. wartime cruisers (29 vessels completed). Little Rock served with distinction as flagship for both the Second and Sixth fleets. In 1960, she was converted to a Talos missile cruiser, making four cruises to the Mediterranean and two to the North Atlantic. Little Rock was stricken from the Navy Register in 1976 and acquired by the City of Buffalo in 1977. The ship is now a museum vessel on display at the Naval & Military Park with USS Croaker and USS The Sullivans, as well as PTF-17, a Bell P-39 Airacobra, an FJ-4B Fury, an M-41 tank, an M-84 armored personnel carrier and a UH-1 helicopter. Little Rock conducts youth group overnight encampments.
An inversion layer hangs over residents of Boulder, Colorado. Source: T. Eastburn

Pollution's Effects on Us

The atmosphere is one of the few resources shared among all Earth's inhabitants. As a consequence, the pollution that spews from a factory in Asia, a fire in Australia, a dust storm in Africa, or car emissions in North America can have a detrimental impact on people and the environment locally or an ocean away.

Scientists have researched and documented many of the local hazards, from ozone to the atmospheric chemicals that cause acid rain. They have also studied impacts from airborne particles of dust, soot, and other particulate pollutants. From actual events and scientific research, we now know that air pollution can impact human health; that atmospheric haze or smog reduces visibility; and that the acid rain from sulfur dioxide emissions damages property, pollutes water resources, and can harm forests, wildlife, and entire ecosystems.

But what are the regional and global impacts of air pollution? Through large scientific field campaigns such as MILAGRO, scientists are focusing on the entire life cycle of air pollution. Their goal is to track its transport from large cities into regional and global environments to better understand the full scope of the problem. In doing so, they will be able to determine pollution's impact on large natural systems such as Earth's climate.

Is air pollution an example of the "Tragedy of the Commons," a concept that states that any resource open to everyone will eventually be destroyed? While the evidence of human-produced air pollution lends truth to the statement, it is also true that air becomes a Tragedy of the Commons only if people choose not to preserve the atmosphere for themselves and future generations. Much has been done to improve air quality in recent decades, but we still have a long way to go.

Last modified February 17, 2006 by Teri Eastburn.